From rhelv6-list at redhat.com Sat Apr 6 04:34:55 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Fri, 05 Apr 2013 22:34:55 -0600 Subject: [rhelv6-list] usb 2.0 host devices in kvm? Message-ID: <515FA5EF.8030402@gmail.com> Does Red Hat EL6 support adding usb 2.0 devices from the host to the guests in kvm using libvirt? I have been searching google and have found lots of hints that libvirt might now finally support usb 2.0 but no definite information on rhelv6. I have a USB hasp that I need to access from a virtual machine and USB 1.0 and 1.1 just don't work for it. If not from virt-manager directly, can it be done from the command-line? thanks. From rhelv6-list at redhat.com Mon Apr 8 12:23:28 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Mon, 8 Apr 2013 13:23:28 +0100 Subject: [rhelv6-list] RH lists munging From: header In-Reply-To: <513E0462.1070803@redhat.com> References: <20130221162237.GA12934@hiwaay.net> <51264D89.4060804@cse.yorku.ca> <20130221191633.29235c9f@gmail.com> <201302251354.r1PDssoC001606@particle.nhn.ou.edu> <20130225155836.4207a584@gmail.com> <201302251523.r1PFNocQ004098@particle.nhn.ou.edu> <20130225212850.758cab14@gmail.com> <201302252108.r1PL8taD004945@particle.nhn.ou.edu> <513DE3D8.1040809@redhat.com> <513E0462.1070803@redhat.com> Message-ID: On 11 March 2013 16:20, Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list wrote: > No problem! > > Looks like some of the list admins are travelling. Please just know we > will try to resolve this as soon as possible. But wanted to give a quick > update for any delays going forward. > > ~rp The admins are on a month-long cruise? Are we ever going to get this fixed? jch -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhelv6-list at redhat.com Mon Apr 8 12:31:42 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Mon, 8 Apr 2013 07:31:42 -0500 Subject: [rhelv6-list] RH lists munging From: header In-Reply-To: References: <201302251354.r1PDssoC001606@particle.nhn.ou.edu> <20130225155836.4207a584@gmail.com> <201302251523.r1PFNocQ004098@particle.nhn.ou.edu> <20130225212850.758cab14@gmail.com> <201302252108.r1PL8taD004945@particle.nhn.ou.edu> <513DE3D8.1040809@redhat.com> <513E0462.1070803@redhat.com> Message-ID: <20130408123142.GP3575@frodo.gerdesas.com> On Mon, Apr 08, 2013 at 01:23:28PM +0100, Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list wrote: > > The admins are on a month-long cruise? Are we ever going to get this > fixed? I wouldn't go holding my breath waiting for it to occur if I were you; which is a shame since the way things are setup now it's pretty worthless. John -- American youth attributes much more importance to arriving at driver's license age than at voting age. -- Marshall McLuhan (1911-1980), Canadian philosopher of communication theory, Understanding Media (1964) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From rhelv6-list at redhat.com Mon Apr 8 13:04:58 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Mon, 8 Apr 2013 09:04:58 -0400 Subject: [rhelv6-list] RH lists munging From: header In-Reply-To: <20130408123142.GP3575@frodo.gerdesas.com> References: <201302251354.r1PDssoC001606@particle.nhn.ou.edu> <20130225155836.4207a584@gmail.com> <201302251523.r1PFNocQ004098@particle.nhn.ou.edu> <20130225212850.758cab14@gmail.com> <201302252108.r1PL8taD004945@particle.nhn.ou.edu> <513DE3D8.1040809@redhat.com> <513E0462.1070803@redhat.com> <20130408123142.GP3575@frodo.gerdesas.com> Message-ID: Ill followup again today. We have discussed this but just want to do what is best for everyone and avoid the spam bots. Ill see if I can get admin rights. Thanks and hope everyone had a wonderful weekend ~rp On Apr 8, 2013 8:39 AM, "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" wrote: > On Mon, Apr 08, 2013 at 01:23:28PM +0100, Red Hat Enterprise Linux 6 > (Santiago) discussion mailing-list wrote: > > > > The admins are on a month-long cruise? Are we ever going to get this > > fixed? > > I wouldn't go holding my breath waiting for it to occur if I were you; > which is a shame since the way things are setup now it's pretty > worthless. > > > > > John > -- > American youth attributes much more importance to arriving at driver's > license age than at voting age. > > -- Marshall McLuhan (1911-1980), Canadian philosopher of communication > theory, > Understanding Media (1964) > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhelv6-list at redhat.com Tue Apr 9 15:41:12 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Tue, 09 Apr 2013 10:41:12 -0500 Subject: [rhelv6-list] vim settings Message-ID: <51643698.5030500@sivell.com> I have ~/.vimrc with 'set tw=120', but for one particular text file vim keeps going to the next line at the column 80. When I use vim to open a new file, this does not happen. For this particular file, if I use vim -u ~/.vimrc then everything goes fine. And if this file is renamed to a new file with an extension different than .txt, then vim works as expected ( new line at column 120 ). I also deleted .viminfo but this does not help. It must be something in vim that I do not understand. Any advice is greatly appreciated. Thanks, Vu From rhelv6-list at redhat.com Tue Apr 9 15:48:48 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Tue, 9 Apr 2013 10:48:48 -0500 Subject: [rhelv6-list] vim settings In-Reply-To: <51643698.5030500@sivell.com> References: <51643698.5030500@sivell.com> Message-ID: <20130409154848.GB23311@hiwaay.net> Once upon a time, Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list said: > I have ~/.vimrc with 'set tw=120', but for one particular text file vim > keeps going to the next line at the column 80. When I use vim to open a > new file, this does not happen. > > For this particular file, if I use vim -u ~/.vimrc then everything goes > fine. And if this file is renamed to a new file with an extension > different than .txt, then vim works as expected ( new line at column 120 ). 
The standard /etc/vimrc includes this bit: " In text files, always limit the width of text to 78 characters autocmd BufRead *.txt set tw=78 If you don't want that (or any of the other autocmd entries from /etc/vimrc), putting the following line in your ~/.vimrc should disable them: augroup! redhat -- Chris Adams Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble. From rhelv6-list at redhat.com Tue Apr 9 16:26:01 2013 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Tue, 09 Apr 2013 11:26:01 -0500 Subject: [rhelv6-list] vim settings In-Reply-To: <20130409154848.GB23311@hiwaay.net> References: <51643698.5030500@sivell.com> <20130409154848.GB23311@hiwaay.net> Message-ID: <51644119.5010507@sivell.com> On 4/9/2013 10:48 AM, Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list wrote: > Once upon a time, Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list said: >> I have ~/.vimrc with 'set tw=120', but for one particular text file vim >> keeps going to the next line at the column 80. When I use vim to open a >> new file, this does not happen. >> >> For this particular file, if I use vim -u ~/.vimrc then everything goes >> fine. And if this file is renamed to a new file with an extension >> different than .txt, then vim works as expected ( new line at column 120 ). > > The standard /etc/vimrc includes this bit: > > " In text files, always limit the width of text to 78 characters > autocmd BufRead *.txt set tw=78 > > If you don't want that (or any of the other autocmd entries from > /etc/vimrc), putting the following line in your ~/.vimrc should disable > them: > > augroup! redhat > Chris, That fixes it. Thank you, thank you, thank you :) Vu From robinprice at gmail.com Wed Apr 10 20:46:09 2013 From: robinprice at gmail.com (robinprice at gmail.com) Date: Wed, 10 Apr 2013 16:46:09 -0400 Subject: [rhelv6-list] RH lists munging From: header In-Reply-To: References: <201302251354.r1PDssoC001606@particle.nhn.ou.edu> <20130225155836.4207a584@gmail.com> <201302251523.r1PFNocQ004098@particle.nhn.ou.edu> <20130225212850.758cab14@gmail.com> <201302252108.r1PL8taD004945@particle.nhn.ou.edu> <513DE3D8.1040809@redhat.com> <513E0462.1070803@redhat.com> <20130408123142.GP3575@frodo.gerdesas.com> Message-ID: Test On Apr 8, 2013 9:04 AM, "robinprice at gmail.com" wrote: > Ill followup again today. We have discussed this but just want to do what > is best for everyone and avoid the spam bots. Ill see if I can get admin > rights. > > Thanks and hope everyone had a wonderful weekend > > ~rp > On Apr 8, 2013 8:39 AM, "Red Hat Enterprise Linux 6 (Santiago) discussion > mailing-list" wrote: > >> On Mon, Apr 08, 2013 at 01:23:28PM +0100, Red Hat Enterprise Linux 6 >> (Santiago) discussion mailing-list wrote: >> > >> > The admins are on a month-long cruise? Are we ever going to get this >> > fixed? >> >> I wouldn't go holding my breath waiting for it to occur if I were you; >> which is a shame since the way things are setup now it's pretty >> worthless. >> >> >> >> >> John >> -- >> American youth attributes much more importance to arriving at driver's >> license age than at voting age. 
>> >> -- Marshall McLuhan (1911-1980), Canadian philosopher of communication >> theory, >> Understanding Media (1964) >> >> _______________________________________________ >> rhelv6-list mailing list >> rhelv6-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhelv6-list >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amyagi at gmail.com Wed Apr 10 21:31:29 2013 From: amyagi at gmail.com (Akemi Yagi) Date: Wed, 10 Apr 2013 14:31:29 -0700 Subject: [rhelv6-list] RH lists munging From: header In-Reply-To: References: <201302251354.r1PDssoC001606@particle.nhn.ou.edu> <20130225155836.4207a584@gmail.com> <201302251523.r1PFNocQ004098@particle.nhn.ou.edu> <20130225212850.758cab14@gmail.com> <201302252108.r1PL8taD004945@particle.nhn.ou.edu> <513DE3D8.1040809@redhat.com> <513E0462.1070803@redhat.com> <20130408123142.GP3575@frodo.gerdesas.com> Message-ID: Passed. :-) On Wed, Apr 10, 2013 at 1:46 PM, robinprice at gmail.com wrote: > Test > > On Apr 8, 2013 9:04 AM, "robinprice at gmail.com" wrote: >> >> Ill followup again today. We have discussed this but just want to do what >> is best for everyone and avoid the spam bots. Ill see if I can get admin >> rights. >> >> Thanks and hope everyone had a wonderful weekend >> >> ~rp >> >> On Apr 8, 2013 8:39 AM, "Red Hat Enterprise Linux 6 (Santiago) discussion >> mailing-list" wrote: >>> >>> On Mon, Apr 08, 2013 at 01:23:28PM +0100, Red Hat Enterprise Linux 6 >>> (Santiago) discussion mailing-list wrote: >>> > >>> > The admins are on a month-long cruise? Are we ever going to get this >>> > fixed? >>> >>> I wouldn't go holding my breath waiting for it to occur if I were you; >>> which is a shame since the way things are setup now it's pretty >>> worthless. >>> >>> >>> >>> >>> John >>> -- >>> American youth attributes much more importance to arriving at driver's >>> license age than at voting age. >>> >>> -- Marshall McLuhan (1911-1980), Canadian philosopher of communication >>> theory, >>> Understanding Media (1964) >>> >>> _______________________________________________ >>> rhelv6-list mailing list >>> rhelv6-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhelv6-list From akrherz at iastate.edu Fri Apr 12 13:42:16 2013 From: akrherz at iastate.edu (Daryl Herzmann) Date: Fri, 12 Apr 2013 08:42:16 -0500 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: Message-ID: Hi, I figured I'd try to solicit the kind help of the mailing list again on this as I continue to have issues with XFS and RHEL6. For example, I have a 12 TB software RAID5 filesystem on a LSI 92118 and the drives are 3 TB Seagate Barracuda ST3000DM001. This filesystem currently has around 140 million files with many of them smaller than 50 KB. This system is running a fully patched RHEL6.4 Within this filesystem, I have a one particular tree of files I need to remove. There are ~170 folders with around 4-10 sub-folders each and about 1,000 files in each of those sub-folders. Most files are less than 40KB. Attempting to list out one of those top level folders like so: ls -R * | wc -l takes 50 seconds and wc reports 3825 lines (~files). Watching iostat during this operation, the tps value pokes along around 100 to 150 tps. This filesystem is doing other things at the time as well. 
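(As an aside, timings like this are only comparable between runs if the cache state is similar; a rough sketch for a cold-cache measurement -- only if it is acceptable to briefly evict caches on a busy box, since everything will be slower for a short while afterwards -- would be something like, as root:

  sync
  echo 3 > /proc/sys/vm/drop_caches    # drop page cache plus dentries and inodes
  time ls -R myfolder | wc -l

where "myfolder" stands in for one of the top level folders described above.)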
Just running iostat without args currently reports: avg-cpu: %user %nice %system %iowait %steal %idle 11.12 0.03 2.70 3.60 0.00 82.56 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn md127 134.36 10336.87 11381.45 19674692141 21662893316 If I go into one of these folders with 1,000 files or so in it and attempt to list out the directory cold, it takes 10-15 seconds. Attempting to remove one of the top level folders takes a long time and the other filesystem operations at the time feel very sluggish as well. $ time rm -rf myfolder (this is around 4,000 files total within 6 subfolders of myfolder) real 2m36.925s user 0m0.018s sys 0m0.657s Running hdparm on one of the software raid5 drives reports decent numbers. /dev/sdb: Timing cached reads: 12882 MB in 2.00 seconds = 6448.13 MB/sec Timing buffered disk reads: 396 MB in 3.06 seconds = 129.39 MB/sec running some crude dd tests shows reasonable numbers, I think. # dd bs=1M count=1280 if=/dev/zero of=test conv=fdatasync 1342177280 bytes (1.3 GB) copied, 29.389 s, 45.7 MB/s I have other similiar filesystems on ext4 with similiar hardware and millions of small files as well. I don't see such sluggishness with small files and directories there. I guess I picked XFS for this filesystem initially because of its fast fsck times. Here are some more details on the filesystem # xfs_info /dev/md127 meta-data=/dev/md127 isize=256 agcount=32, agsize=91570816 blks = sectsz=512 attr=2, projid32bit=0 data = bsize=4096 blocks=2930265088, imaxpct=5 = sunit=128 swidth=512 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal bsize=4096 blocks=521728, version=2 = sectsz=512 sunit=8 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 # grep md127 /proc/mounts /dev/md127 /mesonet xfs rw,noatime,attr2,delaylog,sunit=1024,swidth=4096,noquota 0 0 # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md127 : active raid5 sdf[3] sde[0] sdd[5] sdc[1] sdb[2] 11721060352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU] Anybody have ideas? or are these still known issues with XFS on RHEL as noted here: http://www.redhat.com/summit/2011/presentations/summit/decoding_the_code/thursday/wheeler_t_0310_billion_files_2011.pdf thanks daryl On Fri, Jun 22, 2012 at 7:45 PM, Daryl Herzmann wrote: > On Tue, Jun 5, 2012 at 3:10 PM, Jussi Silvennoinen > wrote: > >> I've been noticing lots of annoying problems with XFS performance with > >> RHEL6.2 on 64bit. I typically have 20-30 TB file systems with data > >> structured in directories based on day of year, product type, for > example, > >> > >> /data/2012/06/05/product/blah.gif > >> > >> Doing operations like tar or rm over these directories bring the system > to > >> a grinding halt. Load average goes vertical and eventually the power > button > >> needs to be pressed in many cases :( A hack workaround is to break > apart the > >> task into smaller chunks and let the system breath in between > operations... > >> > >> Anyway, I read Ric Wheeler's "Billion Files" with great interest > >> > >> > >> > http://www.redhat.com/summit/2011/presentations/summit/decoding_the_code/thursday/wheeler_t_0310_billion_files_2011.pdf > >> > >> It appears there are 'known issues' with XFS and RHEL6.1. It does not > >> appear these issues were addressed in RHEL 6.2? > >> > >> Does anybody know if these issues were addressed in the upcoming RHEL > 6.3? > >> My impression is that upstream fixes for this only recently (last 6 > months?) > >> appeared in the mainline kernel. 
> >> > >> Perhaps I am missing some tuning that could be done to help with this? > > > > > > Enabling lazy-count does wonders for workloads that involve massive > amounts > > of metadata. Unfortunately it's a mkfs-time option only AFAIK. > > Thanks, but it was already enabled... > > daryl > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jussi_rhel6 at silvennoinen.net Mon Apr 15 08:50:35 2013 From: jussi_rhel6 at silvennoinen.net (Jussi Silvennoinen) Date: Mon, 15 Apr 2013 11:50:35 +0300 (EEST) Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: Message-ID: > avg-cpu: ?%user ? %nice %system %iowait ?%steal ? %idle > ? ? ? ? ? 11.12 ? ?0.03 ? ?2.70 ? ?3.60 ? ?0.00 ? 82.56 > > Device: ? ? ? ? ? ?tps ? Blk_read/s ? Blk_wrtn/s ? Blk_read ? Blk_wrtn > md127 ? ? ? ? ? 134.36 ? ? 10336.87 ? ? 11381.45 19674692141 21662893316 Do use iostat -x to see more details which will give a better indication how busy the disks are. > Running hdparm on one of the software raid5 drives reports decent numbers. > > /dev/sdb: > ?Timing cached reads: ? 12882 MB in ?2.00 seconds = 6448.13 MB/sec > ?Timing buffered disk reads: ?396 MB in ?3.06 seconds = 129.39 MB/sec > > running some crude dd tests shows reasonable numbers, I think. > > # dd bs=1M count=1280 if=/dev/zero of=test conv=fdatasync > 1342177280 bytes (1.3 GB) copied, 29.389 s, 45.7 MB/s Yes crude and not very useful. > I have other similiar filesystems on ext4 with similiar hardware and > millions of small files as well. ?I don't see such sluggishness with small > files and directories there. ?I guess I picked XFS for this filesystem > initially because of its fast fsck times. Are those other systems also employing software raid? In my experience, swraid is painfully slow with random writes. And your workload in this use case is exactly that. > # grep md127 /proc/mounts? > /dev/md127 /mesonet xfs > rw,noatime,attr2,delaylog,sunit=1024,swidth=4096,noquota 0 0 inode64 is not used, I suspect it would have helped alot. Enabling it afterwards will not help for data which is already on the disk but it will help with new files. -- Jussi From akrherz at iastate.edu Mon Apr 15 12:58:30 2013 From: akrherz at iastate.edu (Daryl Herzmann) Date: Mon, 15 Apr 2013 07:58:30 -0500 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: Message-ID: Good morning, Thanks for the response and the fun never stops! This system crashed on Saturday morning with the following <4>------------[ cut here ]------------ <2>kernel BUG at include/linux/swapops.h:126! <4>invalid opcode: 0000 [#1] SMP <4>last sysfs file: /sys/kernel/mm/ksm/run <4>CPU 7 <4>Modules linked in: iptable_filter ip_tables nfsd nfs lockd fscache auth_rpcgss nfs_acl sunrpc bridge stp llc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 xfs exportfs vhost_net macvtap macvlan tun kvm_intel kvm raid456 async_raid6_recov async_pq power_meter raid6_pq async_xor dcdbas xor microcode serio_raw async_memcpy async_tx iTCO_wdt iTCO_vendor_support i7core_edac edac_core sg bnx2 ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix wmi mpt2sas scsi_transport_sas raid_class dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib] <4> <4>Pid: 4581, comm: ssh Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Dell Inc. 
PowerEdge T410/0Y2G6P <4>RIP: 0010:[] [] migration_entry_wait+0x181/0x190 <4>RSP: 0000:ffff8801c1703c88 EFLAGS: 00010246 <4>RAX: ffffea0000000000 RBX: ffffea0003bf6f58 RCX: ffff880236437580 <4>RDX: 00000000001121fd RSI: ffff8801c040e5d8 RDI: 000000002243fa3e <4>RBP: ffff8801c1703ca8 R08: ffff8801c040e5d8 R09: 0000000000000029 <4>R10: ffff8801d6850200 R11: 00002ad7d96cbf5a R12: ffffea0007bdec18 <4>R13: 0000000236437580 R14: 0000000236437067 R15: 00002ad7d76b0000 <4>FS: 00002ad7dace2880(0000) GS:ffff880028260000(0000) knlGS:0000000000000000 <4>CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4>CR2: 00002ad7d76b0000 CR3: 00000001bb686000 CR4: 00000000000007e0 <4>DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 <4>DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 <4>Process ssh (pid: 4581, threadinfo ffff8801c1702000, task ffff880261aa7500) <4>Stack: <4> ffff88024b5f22d8 0000000000000000 000000002243fa3e ffff8801c040e5d8 <4> ffff8801c1703d88 ffffffff811441b8 0000000000000000 ffff8801c1703d08 <4> ffff8801c1703eb8 ffff8801c1703dc8 ffff880328cb48c0 0000000000000040 <4>Call Trace: <4> [] handle_pte_fault+0xb48/0xb50 <4> [] ? sock_aio_write+0x19b/0x1c0 <4> [] ? __pagevec_free+0x44/0x90 <4> [] handle_mm_fault+0x23a/0x310 <4> [] __do_page_fault+0x139/0x480 <4> [] ? vfs_ioctl+0x22/0xa0 <4> [] ? unmap_region+0x110/0x130 <4> [] ? do_vfs_ioctl+0x84/0x580 <4> [] do_page_fault+0x3e/0xa0 <4> [] page_fault+0x25/0x30 <4>Code: e8 f5 2f fc ff e9 59 ff ff ff 48 8d 53 08 85 c9 0f 84 44 ff ff ff 8d 71 01 48 63 c1 48 63 f6 f0 0f b1 32 39 c1 74 be 89 c1 eb e3 <0f> 0b eb fe 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 83 <1>RIP [] migration_entry_wait+0x181/0x190 <4> RSP It rebooted itself, now I must have some filesytem corruption as this is being dumped frequently: XFS (md127): page discard on page ffffea0003c95018, inode 0x849ec442, offset 0. XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 342 of file fs/xfs/xfs_alloc.c. Caller 0xffffffffa02986c2 Pid: 1304, comm: xfsalloc/7 Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Call Trace: [] ? xfs_error_report+0x3f/0x50 [xfs] [] ? xfs_alloc_ag_vextent_size+0x482/0x630 [xfs] [] ? xfs_alloc_lookup_eq+0x19/0x20 [xfs] [] ? xfs_alloc_fixup_trees+0x236/0x350 [xfs] [] ? xfs_alloc_ag_vextent_size+0x482/0x630 [xfs] [] ? xfs_alloc_ag_vextent+0xad/0x100 [xfs] [] ? xfs_alloc_vextent+0x2bc/0x610 [xfs] [] ? xfs_bmap_btalloc+0x267/0x700 [xfs] [] ? find_busiest_queue+0x69/0x150 [] ? xfs_bmap_alloc+0xe/0x10 [xfs] [] ? xfs_bmapi_allocate_worker+0x4a/0x80 [xfs] [] ? xfs_bmapi_allocate_worker+0x0/0x80 [xfs] [] ? worker_thread+0x170/0x2a0 [] ? autoremove_wake_function+0x0/0x40 [] ? worker_thread+0x0/0x2a0 [] ? kthread+0x96/0xa0 [] ? child_rip+0xa/0x20 [] ? kthread+0x0/0xa0 [] ? child_rip+0x0/0x20 XFS (md127): page discard on page ffffea0003890fa0, inode 0x849ec441, offset 0. Anyway, to respond to your questions: On Mon, Apr 15, 2013 at 3:50 AM, Jussi Silvennoinen < jussi_rhel6 at silvennoinen.net> wrote: > avg-cpu: %user %nice %system %iowait %steal %idle >> 11.12 0.03 2.70 3.60 0.00 82.56 >> >> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn >> md127 134.36 10336.87 11381.45 19674692141 21662893316 >> > > Do use iostat -x to see more details which will give a better indication > how busy the disks are. 
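(Worth noting: plain "iostat -x" with no interval argument prints averages accumulated since boot; sampling over an interval shows what the disks are doing right now, for example:

  iostat -x 5 3    # three reports, five seconds apart; ignore the first, since-boot report

The 5-second interval and 3-report count here are just example values.)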
# iostat -x Linux 2.6.32-358.2.1.el6.x86_64 (iem21.local) 04/15/2013 _x86_64_ (16 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 10.33 0.00 3.31 2.24 0.00 84.11 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 3.48 1002.05 22.42 33.26 1162.56 8277.06 169.55 6.52 117.17 2.49 13.86 sdc 3805.96 173.47 292.94 28.83 33747.35 1611.10 109.89 3.47 10.74 0.82 26.46 sde 3814.91 174.53 285.98 29.92 33761.01 1628.96 112.03 5.70 17.97 0.97 30.63 sdb 3813.98 173.45 284.85 28.66 33745.12 1609.93 112.77 4.07 12.94 0.91 28.48 sdd 3805.78 174.18 294.19 29.35 33754.41 1621.14 109.34 3.81 11.73 0.84 27.32 sdf 3813.80 173.68 285.46 29.04 33751.91 1614.36 112.45 4.70 14.91 0.93 29.17 md127 0.00 0.00 21.75 45.85 4949.72 5919.63 160.78 0.00 0.00 0.00 0.00 but I suspect this is inflated, since it just completed a raid5 resync. > I have other similiar filesystems on ext4 with similiar hardware and >> millions of small files as well. I don't see such sluggishness with small >> files and directories there. I guess I picked XFS for this filesystem >> initially because of its fast fsck times. >> > > Are those other systems also employing software raid? In my experience, > swraid is painfully slow with random writes. And your workload in this use > case is exactly that. Some of them are and some aren't. I have an opportunity to move this workload to a hardware RAID5, so I may just do that and cut my losses :) > # grep md127 /proc/mounts >> /dev/md127 /mesonet xfs >> rw,noatime,attr2,delaylog,**sunit=1024,swidth=4096,noquota 0 0 >> > > inode64 is not used, I suspect it would have helped alot. Enabling it > afterwards will not help for data which is already on the disk but it will > help with new files. Thanks for the tip, I'll try that out. daryl -------------- next part -------------- An HTML attachment was scrubbed... URL: From riehecky at fnal.gov Mon Apr 15 13:39:11 2013 From: riehecky at fnal.gov (Pat Riehecky) Date: Mon, 15 Apr 2013 08:39:11 -0500 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: Message-ID: <516C02FF.80908@fnal.gov> I've run into some terrible performance when I've had a lot of add/remove actions on the filesystem in parallel. They were mostly due to fragmentation. Alas, XFS can get some horrid fragmentation. xfs_db -c frag -r /dev/ should give you the stats on its fragmentation. I can't speak for others, but I've got 'xfs_fsr' linked into /etc/cron.weekly/ on my personal systems with large XFS filesystems. Pat On 04/15/2013 07:58 AM, Daryl Herzmann wrote: > Good morning, > > Thanks for the response and the fun never stops! This system crashed on > Saturday morning with the following > > <4>------------[ cut here ]------------ > <2>kernel BUG at include/linux/swapops.h:126! 
> <4>invalid opcode: 0000 [#1] SMP > <4>last sysfs file: /sys/kernel/mm/ksm/run > <4>CPU 7 > <4>Modules linked in: iptable_filter ip_tables nfsd nfs lockd fscache > auth_rpcgss nfs_acl sunrpc bridge stp llc ip6t_REJECT nf_conntrack_ipv6 > nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 xfs > exportfs vhost_net macvtap macvlan tun kvm_intel kvm raid456 > async_raid6_recov async_pq power_meter raid6_pq async_xor dcdbas xor > microcode serio_raw async_memcpy async_tx iTCO_wdt iTCO_vendor_support > i7core_edac edac_core sg bnx2 ext4 mbcache jbd2 sr_mod cdrom sd_mod > crc_t10dif pata_acpi ata_generic ata_piix wmi mpt2sas scsi_transport_sas > raid_class dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib] > <4> > <4>Pid: 4581, comm: ssh Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Dell Inc. > PowerEdge T410/0Y2G6P > <4>RIP: 0010:[] [] > migration_entry_wait+0x181/0x190 > <4>RSP: 0000:ffff8801c1703c88 EFLAGS: 00010246 > <4>RAX: ffffea0000000000 RBX: ffffea0003bf6f58 RCX: ffff880236437580 > <4>RDX: 00000000001121fd RSI: ffff8801c040e5d8 RDI: 000000002243fa3e > <4>RBP: ffff8801c1703ca8 R08: ffff8801c040e5d8 R09: 0000000000000029 > <4>R10: ffff8801d6850200 R11: 00002ad7d96cbf5a R12: ffffea0007bdec18 > <4>R13: 0000000236437580 R14: 0000000236437067 R15: 00002ad7d76b0000 > <4>FS: 00002ad7dace2880(0000) GS:ffff880028260000(0000) knlGS:0000000000000000 > <4>CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > <4>CR2: 00002ad7d76b0000 CR3: 00000001bb686000 CR4: 00000000000007e0 > <4>DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > <4>DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 > <4>Process ssh (pid: 4581, threadinfo ffff8801c1702000, task ffff880261aa7500) > <4>Stack: > <4> ffff88024b5f22d8 0000000000000000 000000002243fa3e ffff8801c040e5d8 > <4> ffff8801c1703d88 ffffffff811441b8 0000000000000000 ffff8801c1703d08 > <4> ffff8801c1703eb8 ffff8801c1703dc8 ffff880328cb48c0 0000000000000040 > <4>Call Trace: > <4> [] handle_pte_fault+0xb48/0xb50 > <4> [] ? sock_aio_write+0x19b/0x1c0 > <4> [] ? __pagevec_free+0x44/0x90 > <4> [] handle_mm_fault+0x23a/0x310 > <4> [] __do_page_fault+0x139/0x480 > <4> [] ? vfs_ioctl+0x22/0xa0 > <4> [] ? unmap_region+0x110/0x130 > <4> [] ? do_vfs_ioctl+0x84/0x580 > <4> [] do_page_fault+0x3e/0xa0 > <4> [] page_fault+0x25/0x30 > <4>Code: e8 f5 2f fc ff e9 59 ff ff ff 48 8d 53 08 85 c9 0f 84 44 ff ff ff > 8d 71 01 48 63 c1 48 63 f6 f0 0f b1 32 39 c1 74 be 89 c1 eb e3 <0f> 0b eb fe > 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 83 > <1>RIP [] migration_entry_wait+0x181/0x190 > <4> RSP > > It rebooted itself, now I must have some filesytem corruption as this is > being dumped frequently: > > XFS (md127): page discard on page ffffea0003c95018, inode 0x849ec442, offset 0. > XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 342 of file > fs/xfs/xfs_alloc.c. Caller 0xffffffffa02986c2 > > Pid: 1304, comm: xfsalloc/7 Not tainted 2.6.32-358.2.1.el6.x86_64 #1 > Call Trace: > [] ? xfs_error_report+0x3f/0x50 [xfs] > [] ? xfs_alloc_ag_vextent_size+0x482/0x630 [xfs] > [] ? xfs_alloc_lookup_eq+0x19/0x20 [xfs] > [] ? xfs_alloc_fixup_trees+0x236/0x350 [xfs] > [] ? xfs_alloc_ag_vextent_size+0x482/0x630 [xfs] > [] ? xfs_alloc_ag_vextent+0xad/0x100 [xfs] > [] ? xfs_alloc_vextent+0x2bc/0x610 [xfs] > [] ? xfs_bmap_btalloc+0x267/0x700 [xfs] > [] ? find_busiest_queue+0x69/0x150 > [] ? xfs_bmap_alloc+0xe/0x10 [xfs] > [] ? xfs_bmapi_allocate_worker+0x4a/0x80 [xfs] > [] ? xfs_bmapi_allocate_worker+0x0/0x80 [xfs] > [] ? 
worker_thread+0x170/0x2a0 > [] ? autoremove_wake_function+0x0/0x40 > [] ? worker_thread+0x0/0x2a0 > [] ? kthread+0x96/0xa0 > [] ? child_rip+0xa/0x20 > [] ? kthread+0x0/0xa0 > [] ? child_rip+0x0/0x20 > XFS (md127): page discard on page ffffea0003890fa0, inode 0x849ec441, offset 0. > > Anyway, to respond to your questions: > > > On Mon, Apr 15, 2013 at 3:50 AM, Jussi Silvennoinen > > wrote: > > avg-cpu: %user %nice %system %iowait %steal %idle > 11.12 0.03 2.70 3.60 0.00 82.56 > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > md127 134.36 10336.87 11381.45 19674692141 21662893316 > > > Do use iostat -x to see more details which will give a better indication > how busy the disks are. > > > # iostat -x > Linux 2.6.32-358.2.1.el6.x86_64 (iem21.local) 04/15/2013 _x86_64_(16 CPU) > > avg-cpu: %user %nice %system %iowait %steal %idle > 10.33 0.00 3.31 2.24 0.00 84.11 > > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz > avgqu-sz await svctm %util > sda 3.48 1002.05 22.42 33.26 1162.56 8277.06 169.55 > 6.52 117.17 2.49 13.86 > sdc 3805.96 173.47 292.94 28.83 33747.35 1611.10 109.89 > 3.47 10.74 0.82 26.46 > sde 3814.91 174.53 285.98 29.92 33761.01 1628.96 112.03 > 5.70 17.97 0.97 30.63 > sdb 3813.98 173.45 284.85 28.66 33745.12 1609.93 112.77 > 4.07 12.94 0.91 28.48 > sdd 3805.78 174.18 294.19 29.35 33754.41 1621.14 109.34 > 3.81 11.73 0.84 27.32 > sdf 3813.80 173.68 285.46 29.04 33751.91 1614.36 112.45 > 4.70 14.91 0.93 29.17 > md127 0.00 0.00 21.75 45.85 4949.72 5919.63 160.78 > 0.00 0.00 0.00 0.00 > > but I suspect this is inflated, since it just completed a raid5 resync. > > I have other similiar filesystems on ext4 with similiar hardware and > millions of small files as well. I don't see such sluggishness with > small > files and directories there. I guess I picked XFS for this filesystem > initially because of its fast fsck times. > > > Are those other systems also employing software raid? In my experience, > swraid is painfully slow with random writes. And your workload in this > use case is exactly that. > > > > Some of them are and some aren't. I have an opportunity to move this > workload to a hardware RAID5, so I may just do that and cut my losses :) > > # grep md127 /proc/mounts > /dev/md127 /mesonet xfs > rw,noatime,attr2,delaylog,sunit=1024,swidth=4096,noquota 0 0 > > > inode64 is not used, I suspect it would have helped alot. Enabling it > afterwards will not help for data which is already on the disk but it > will help with new files. > > > Thanks for the tip, I'll try that out. > > daryl > > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list -- Pat Riehecky Scientific Linux developer http://www.scientificlinux.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From akrherz at iastate.edu Mon Apr 15 14:02:28 2013 From: akrherz at iastate.edu (Daryl Herzmann) Date: Mon, 15 Apr 2013 09:02:28 -0500 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: <516C02FF.80908@fnal.gov> References: <516C02FF.80908@fnal.gov> Message-ID: Thanks for your help. On Mon, Apr 15, 2013 at 8:39 AM, Pat Riehecky wrote: > I've run into some terrible performance when I've had a lot of > add/remove actions on the filesystem in parallel. They were mostly due to > fragmentation. Alas, XFS can get some horrid fragmentation. > > xfs_db -c frag -r /dev/ > > should give you the stats on its fragmentation. 
> # xfs_db -c frag -r /dev/md127 actual 140539575, ideal 139998847, fragmentation factor 0.38% Here's an iostat snapshot while I was running xfs_db, the tps numbers for md127 seem strange. sd[b-f] are a part of the raid5.... avg-cpu: %user %nice %system %iowait %steal %idle 5.12 0.00 7.12 8.81 0.00 78.94 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 35.00 1032.00 656.00 1032 656 sdc 458.00 32451.00 680.00 32451 680 sde 451.00 30936.00 544.00 30936 544 sdb 573.00 32448.00 784.00 32448 784 sdd 527.00 30728.00 696.00 30728 696 sdf 593.00 31736.00 624.00 31736 624 md127 157986.00 157983.00 1592.00 157983 1592 > I can't speak for others, but I've got 'xfs_fsr' linked into > /etc/cron.weekly/ on my personal systems with large XFS filesystems. > Seems like I shouldn't have to do that given the numbers above? daryl -- > Pat Riehecky > > Scientific Linux developerhttp://www.scientificlinux.org/ > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From riehecky at fnal.gov Mon Apr 15 14:09:09 2013 From: riehecky at fnal.gov (Pat Riehecky) Date: Mon, 15 Apr 2013 09:09:09 -0500 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: <516C02FF.80908@fnal.gov> Message-ID: <516C0A05.2060001@fnal.gov> On 04/15/2013 09:02 AM, Daryl Herzmann wrote: > Thanks for your help. > > On Mon, Apr 15, 2013 at 8:39 AM, Pat Riehecky > wrote: > > I've run into some terrible performance when I've had a lot of > add/remove actions on the filesystem in parallel. They were mostly due > to fragmentation. Alas, XFS can get some horrid fragmentation. > > xfs_db -c frag -r /dev/ > > should give you the stats on its fragmentation. > > > # xfs_db -c frag -r /dev/md127 > actual 140539575, ideal 139998847, fragmentation factor 0.38% > > Here's an iostat snapshot while I was running xfs_db, the tps numbers for > md127 seem strange. sd[b-f] are a part of the raid5.... > > avg-cpu: %user %nice %system %iowait %steal %idle > 5.12 0.00 7.12 8.81 0.00 78.94 > > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn > sda 35.00 1032.00 656.00 1032 656 > sdc 458.00 32451.00 680.00 32451 680 > sde 451.00 30936.00 544.00 30936 544 > sdb 573.00 32448.00 784.00 32448 784 > sdd 527.00 30728.00 696.00 30728 696 > sdf 593.00 31736.00 624.00 31736 624 > md127 157986.00 157983.00 1592.00 157983 1592 > > I can't speak for others, but I've got 'xfs_fsr' linked into > /etc/cron.weekly/ on my personal systems with large XFS filesystems. > > > Seems like I shouldn't have to do that given the numbers above? > > daryl > I would agree, your drive looks nice and optimized..... this thread already has you checking for lazy counts.... those are my two "tricks" for getting XFS to run beautifully. But I've never had a volume over 8TB before..... Pat -- Pat Riehecky -------------- next part -------------- An HTML attachment was scrubbed... URL: From brilong at cisco.com Mon Apr 15 14:30:17 2013 From: brilong at cisco.com (Brian Long (brilong)) Date: Mon, 15 Apr 2013 14:30:17 +0000 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: <516C02FF.80908@fnal.gov> Message-ID: What is the underlying hardware for this 20-30TB XFS filesystem that comes to a halt each time you try large removals? 
Is this server-class hardware with SAS disks or a desktop-class motherboard with a bunch of large SATA disks? I wonder if the underlying hardware could be part of the problem. Are you using onboard SATA or a SATA add-on card? /Brian/ From akrherz at iastate.edu Mon Apr 15 15:01:29 2013 From: akrherz at iastate.edu (Daryl Herzmann) Date: Mon, 15 Apr 2013 10:01:29 -0500 Subject: [rhelv6-list] RHEL6.2 XFS brutal performence with lots of files In-Reply-To: References: <516C02FF.80908@fnal.gov> Message-ID: Thanks for the response. On Mon, Apr 15, 2013 at 9:30 AM, Brian Long (brilong) wrote: > What is the underlying hardware for this 20-30TB XFS filesystem that comes > to a halt each time you try large removals? Its a software RAID5 of 5 3TB disks. So its around 12 TB. > Is this server-class hardware with SAS disks or a desktop-class > motherboard with a bunch of large SATA disks? Its a Dell Poweredge T410 with a LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon], the drives are: Seagate Barracuda (SATA 3Gb/s) ST3000DM001-9YN166 > I wonder if the underlying hardware could be part of the problem. > I know for sure the current sysadmin is the problem, but figuring out what he (I) is doing wrong is the task at hand. I've got other T410s with this setup as well on ext4 and don't see these problems. daryl -------------- next part -------------- An HTML attachment was scrubbed... URL: From vu at sivell.com Tue Apr 23 18:01:46 2013 From: vu at sivell.com (vu Pham) Date: Tue, 23 Apr 2013 13:01:46 -0500 Subject: [rhelv6-list] connection problem : work fine thru SSH tunnel, but not directly Message-ID: <5176CC8A.4010704@sivell.com> I experience connection problem between a client app to a remote app on this particular RHEL 6.4 server. ( The client work fine with the remote app on *other* RHEL servers ). The connection is made just fine, but when I access the data, the client app quickly hangs up on me after spitting out some data. Turning off iptables does not help, although I should not try that because the connection can be made, and I could get some data. The connections to this server will work just fine if they are made thru local port forwarding via SSH. Initially I wanted to post onto the application forum, but because the connections work thru the SSH tunnel so I changed my mind and posted here. I may be wrong, as I often am. If so, I am sorry. Does anybody experience something similar ? Thanks, Vu From David.Kinzel at encana.com Tue Apr 23 18:07:04 2013 From: David.Kinzel at encana.com (Kinzel, David) Date: Tue, 23 Apr 2013 18:07:04 +0000 Subject: [rhelv6-list] connection problem : work fine thru SSH tunnel, but not directly In-Reply-To: <5176CC8A.4010704@sivell.com> References: <5176CC8A.4010704@sivell.com> Message-ID: >I experience connection problem between a client app to a remote app on >this particular RHEL 6.4 server. ( The client work fine with the remote >app on *other* RHEL servers ). The connection is made just fine, but >when I access the data, the client app quickly hangs up on me after >spitting out some data. Turning off iptables does not help, although I >should not try that because the connection can be made, and I could get >some data. > >The connections to this server will work just fine if they are made thru >local port forwarding via SSH. > >Initially I wanted to post onto the application forum, but because the >connections work thru the SSH tunnel so I changed my mind and posted >here. I may be wrong, as I often am. If so, I am sorry. 
> >Does anybody experience something similar ? Bad and strange things like this can happen if you have your MTU misconfigured. > >Thanks, >Vu This email communication and any files transmitted with it may contain confidential and or proprietary information and is provided for the use of the intended recipient only. Any review, retransmission or dissemination of this information by anyone other than the intended recipient is prohibited. If you receive this email in error, please contact the sender and delete this communication and any copies immediately. Thank you. http://www.encana.com From vu at sivell.com Wed Apr 24 13:11:10 2013 From: vu at sivell.com (vu Pham) Date: Wed, 24 Apr 2013 08:11:10 -0500 Subject: [rhelv6-list] connection problem : work fine thru SSH tunnel, but not directly In-Reply-To: References: <5176CC8A.4010704@sivell.com> Message-ID: <5177D9EE.1060404@sivell.com> On 04/23/2013 01:07 PM, Kinzel, David wrote: >> I experience connection problem between a client app to a remote app on >> this particular RHEL 6.4 server. ( The client work fine with the remote >> app on *other* RHEL servers ). The connection is made just fine, but >> when I access the data, the client app quickly hangs up on me after >> spitting out some data. Turning off iptables does not help, although I >> should not try that because the connection can be made, and I could get >> some data. >> >> The connections to this server will work just fine if they are made thru >> local port forwarding via SSH. >> >> Initially I wanted to post onto the application forum, but because the >> connections work thru the SSH tunnel so I changed my mind and posted >> here. I may be wrong, as I often am. If so, I am sorry. >> >> Does anybody experience something similar ? > > Bad and strange things like this can happen if you have your MTU misconfigured. David, Thanks for your reply. The MTU on all the servers are 1500. And just this application has problems. I tried some different values for MTU such as 1492 and 500 last night, but it did not help. I also wrote a test application that simulates the real client application's reading data section and the test app can connect to the server and extract data just fine. Of course, simulation of just one tiny part cannot draw the conclusion for the whole thing, but the result makes the puzzle more "puzzled" :) Thanks, Vu From gianluca.cecchi at gmail.com Mon Apr 29 23:19:55 2013 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Tue, 30 Apr 2013 01:19:55 +0200 Subject: [rhelv6-list] Problem with vlan during kickstart and rhel 6.4 Message-ID: Hello, during a kickstart install of a 6.4 system, my boot line is something like kernel /boot/vmlinuz ks=http://x.y.w.z/ks.cfg vlanid=311 ksdevice=my_mac ip=my_ip netmask=my_nm gateway=my_gw inside the downloaded ks there is a line such as network --bootproto=static --ip=my_ip --netmask=my_nm --gateway=mw_gw --device=eth0 --vlanid=311 I use network install in kickstart with this directive: url --url http://x.y.w.z/rhel6 The installation starts ok, but after about 70-80 packages it gives error about downloading rpm files. Going to F2 console I can see that the command ip addr list 1) at the beginning of installation during first 70 packages' install has no ip associated with eth0 has the specified ip associated with eth0.311 at eth0 2) when it raises error about downloading packages has the ip associated with both eth0 and eth0.311. 
at eth0, and the server cannot ping the gw. As soon as I run ip addr del my_ip dev eth0 at the F2 console and select "retry" at the installation screen, it completes correctly. But at boot the problem remains, because both eth0 and eth0.311 have the ip in their config file, so I have to manually remove it from ifcfg-eth0 and reload the network service. Is this a bug of NetworkManager during install, or my fault in specifying vlan parameters? Perhaps I should use inside the kickstart file something like: network --bootproto=static --ip=my_ip --netmask=my_nm --gateway=mw_gw --device=eth0.311 --vlanid=311 ? After install NetworkManager is disabled, but I can see that it is there during install: is there a way to not have NM during install too, and use the old plain network service..? Thanks in advance Gianluca From gianluca.cecchi at gmail.com Tue Apr 30 12:20:54 2013 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Tue, 30 Apr 2013 14:20:54 +0200 Subject: [rhelv6-list] Problem with vlan during kickstart and rhel 6.4 [SOLVED] Message-ID: On Tue, Apr 30, 2013 at 1:19 AM, Gianluca Cecchi wrote: > Hello, > during a kickstart install of a 6.4 system, my boot line is something like > > kernel /boot/vmlinuz ks=http://x.y.w.z/ks.cfg vlanid=311 > ksdevice=my_mac ip=my_ip netmask=my_nm gateway=my_gw > > > inside the downloaded ks there is a line such as > network --bootproto=static --ip=my_ip --netmask=my_nm --gateway=mw_gw > --device=eth0 --vlanid=311 > If I remove the network line inside the kickstart file, all goes well, both during the installation phase and afterwards with the configuration files. Probably my unnecessary redundant network specification (in both the boot line and the kickstart file) does no harm for access lans but creates these kinds of problems with vlans. Gianluca
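For anyone hitting the same thing, a minimal sketch of what this thread converged on -- the VLAN given only once, on the boot line, and no network directive in the kickstart file (x.y.w.z, vlan id 311 and the my_* placeholders are simply the examples used above, not literal values):

  kernel /boot/vmlinuz ks=http://x.y.w.z/ks.cfg vlanid=311 ksdevice=my_mac ip=my_ip netmask=my_nm gateway=my_gw

  # ks.cfg: no "network ..." line at all; per the report above, the
  # boot-line settings carry through the install and into the
  # ifcfg files of the installed system
  url --url http://x.y.w.z/rhel6

This mirrors Gianluca's conclusion that specifying the network both on the boot line and in the kickstart file is what confused the VLAN setup.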