Meeting Log - 2009-07-23

Ricky Zhou ricky at fedoraproject.org
Thu Jul 23 21:03:10 UTC 2009


20:00 < mmcgrath> #startmeeting
20:00 < zodbot> Meeting started Thu Jul 23 20:00:16 2009 UTC.  The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00 < zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:00  * ricky 
20:00 < mmcgrath> #topic Infrastructure -- Who's here?
20:00 -!- zodbot changed the topic of #fedora-meeting to: Infrastructure -- Who's here?
20:00  * ricky (oops)
20:00  * dgilmore 
20:00  * LinuxCode 
20:01  * johe here
20:01  * nirik is around. 
20:01 < mmcgrath> K.  lets get started on tickets
20:01 < mmcgrath> #topic Infrastructure -- Tickets
20:01 -!- zodbot changed the topic of #fedora-meeting to: Infrastructure -- Tickets
20:01 < mmcgrath> .tiny https://fedorahosted.org/fedora-infrastructure/query?status=new&status=assigned&status=reopened&group=milestone&keywords=~Meeting&order=priority
20:01 < zodbot> mmcgrath: http://tinyurl.com/47e37y
20:01  * davivercillo is here
20:01 < mmcgrath> So the first and only meeting item is, again,
20:01 < mmcgrath> .ticket 1503
20:01 < zodbot> mmcgrath: #1503 (Licensing Guidelines for apps we write) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1503
20:01 < onekopaka> hello
20:01 < mmcgrath> last time we talked about this for the whole meeting
20:01  * onekopaka is here.
20:02 < mmcgrath> this time lets kill it after 10 minutes max.
20:02 < mmcgrath> abadger1999: around?
20:02 < dgilmore> mmcgrath: i think we could again
20:02 < smooge> here
20:02  * abadger1999 arrives
20:02 < dgilmore> did we get input on what source we have to make available? and how we have to make it available if we went with AGPL everywhere?
20:02 < mmcgrath> abadger1999: so give us the latest poop
20:02 < smooge> dgilmore, spot is working on it with legal
20:02 < LinuxCode> I assume this is the AGPLv3 issue ?
20:02 < smooge> or I should wait until abadger1999
20:03  * skvidal is here
20:03 < LinuxCode> wasnt somebody supposed to be here ?
20:03 -!- mchua [n=mchua at nat/redhat/x-fd7206a424906029] has joined #fedora-meeting
20:03 < ricky> LinuxCode: They were at OSCON
20:03 < LinuxCode> ohh
20:03 < abadger1999> spot is talking to legal.  So I think we don't have much to say here.
20:03 < LinuxCode> the person spot mentioned ?
20:03 < abadger1999> Unless people have new questions since last year
20:03 < mmcgrath> abadger1999: so no progress since last week?
20:03  * LinuxCode hasnt, but sees both sides of the argument
20:04 < dgilmore> abadger1999: right. until we clear up the legal requirements we can't do anything
20:04 < abadger1999> mmcgrath: Well, spot has the list of questions now and it has gone from him to legal.  But we haven't gotten a writeup yet.
20:04 < abadger1999> So we can make no progress.
20:04 < mmcgrath> k
20:04 < mmcgrath> Is Bradley Kuhn here?
20:04  * mmcgrath notes domsch invited him
20:05 < ricky> I think he was OSCONing, so mdomsch moved it up a week
20:05 < mmcgrath> ah, k
20:05 < abadger1999> I'd like to go ahead with relicensing python-fedora to lgplv2+ since that won't be affected by whatever we decide regarding agpl.
20:05 < mmcgrath> ah.  so he did :)
20:05 < mmcgrath> abadger1999: What all are we accomplishing with that?
20:06 < abadger1999> mmcgrath: Right now it's gplv2.  LGPLv2+ will make it so more people can use it.
20:06 < abadger1999> for instance, if someone writes code with the apache license, python-fedora will work under the LGPLv2+.
20:06 < abadger1999> also, mdomsch would like it to change.
20:07 < mmcgrath> k
20:07 < mmcgrath> well, if thats all we have on that I'll move on
20:07 < abadger1999> mirrormanager is MIT licensed.  But when used with python-fedora, the combined work is GPLv2.
20:07 < skvidal> together - they fight crime!
20:07 < abadger1999> with python-fedora LGPLv2+, mirrormanager remains MIT.
20:07 < onekopaka> hmm.
20:08 < mmcgrath> skvidal: that's good, I'm pretty sure smolt is committing some....
20:08 < mmcgrath> abadger1999: ok, thanks for that
20:08 < mmcgrath> anyone have anything else on this topic before we move on?
20:08  * LinuxCode shakes head
20:08 < mmcgrath> k
20:09 < mmcgrath> #topic Infrastructure -- Mirrors and 7 years bad luck.
20:09 -!- zodbot changed the topic of #fedora-meeting to: Infrastructure -- Mirrors and 7 years bad luck.
20:09 < LinuxCode> haha
20:09 < smooge> you broke it
20:09 < LinuxCode> its your fault
20:09 < LinuxCode> lol
20:09 < mmcgrath> So, as far as I know the mirrors are on the mend.
20:09 < LinuxCode> +1 can confirm that
20:09 < smooge> I believe so.
20:09 -!- Sonar_Gal [n=Andrea at fedora/SonarGal] has quit "Leaving"
20:09 < LinuxCode> had first updates today
20:10 < mmcgrath> jwb just did a push that finished.
20:10 < mmcgrath> so there's a bash update coming out.
20:10 < mmcgrath> we're trying to time how long that update takes to get to our workstations
20:10 < jwb> mmcgrath, i see it on d.f.r.c, but i haven't gotten it via yum yet
20:10 < mmcgrath> so if any of you see a bash update available, please do ping me and let me know.
20:10 < jwb> (and i'm doing yum clean metadata/yum update every little bit)
20:10  * mmcgrath verifies he also doesn't have it
20:11  * LinuxCode cleans up and sees if it has made it to the UK
20:11 < mmcgrath> yeah, no update yet.
20:11 < mmcgrath> So keep an eye on that.
20:11 < mmcgrath> Here's some of the stuff that's been done
20:11 < mmcgrath> 1) we've put limits on the primary mirrors
20:11 < mmcgrath> 2) we've started building our own public mirrors system which, for now, will be very similar to the old mirrors system.
20:11 < mmcgrath> but we control it
20:12 < mmcgrath> 3) we've leaned harder on various groups that we're blocking on to get our i2 mirror back up and our other primary mirror back up
20:12 -!- davivercillo [n=daviverc at 146.164.31.95] has quit Nick collision from services.
20:12 < mmcgrath> we're supposed to have 3 of them.
20:12 < mmcgrath> But still no root cause, though it sounds like a combination of things.
20:13 < mmcgrath> To me the biggest issue isn't that the problem came up, it's that it took so long to fix and our hands were largely tied for it.
20:13 -!- davivercillo [n=daviverc at 146.164.31.95] has joined #fedora-meeting
20:13  * davivercillo came back...
20:13 < mmcgrath> So we're working hard to build our own mirrors out that we can work on, monitor, etc.
20:13 < Southern_Gentlem> su
20:13 < Southern_Gentlem> su
20:13 < ricky> Password:
20:13 < LinuxCode> yeh, was a bummer that it took that long
20:13 < dgilmore> mmcgrath: how is that going to work?
20:13 < nirik> Southern_Gentlem: sudio? :)
20:14 < Southern_Gentlem> wrong window sorry
20:14 < ricky> dgilmore: It'll just be rsync servers that mount the netapp
20:14 < mmcgrath> dgilmore: for now we've got sync1 and sync2 up (which are RR behind sync.fedoraproject.org) which we're going to dedicate to our tier0 and tier1 mirrors.
20:14 < mmcgrath> They mount the netapp and basically do the same thing download.fedora.redhat.com did
20:14 < mmcgrath> Long term though...
20:14 < ricky> Have we decided to dedicate it, or just have connection slots reserved for tier 0/1?
20:14 < dgilmore> mmcgrath: ok what about from other data centres?
20:14 < dgilmore> RDU and TPA?
20:15 < ricky> rsync's connection limiting allows us to be pretty flexible with how we do that
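[The per-module connection limiting ricky refers to is an rsyncd.conf feature (`max connections`, with a distinct `lock file` per module so the counters don't collide). A minimal sketch under assumed module names, paths, and addresses; this is illustrative, not the actual Fedora config:]

```
# illustrative rsyncd.conf -- module names, paths, and addresses are made up
[fedora-tier01]
    path = /srv/netapp/pub/fedora
    comment = reserved for tier0/tier1 mirrors
    read only = yes
    max connections = 20
    lock file = /var/run/rsyncd-tier01.lock
    hosts allow = 192.0.2.0/24

[fedora-public]
    path = /srv/netapp/pub/fedora
    read only = yes
    max connections = 5
    lock file = /var/run/rsyncd-public.lock
```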
20:15 < mmcgrath> ricky: right now we're not going to tell others about it and we might explicitly deny access to the non-tier0/1 mirrors
20:15 < smooge> TPA?
20:15 < LinuxCode> mmcgrath, so, the other mirrors grab from sync1 and sync2 ?
20:15 < mmcgrath> notting: FYI this might interest you.
20:15 < ricky> OK
20:15 < ricky> smooge: Tampa, I think
20:15 < mmcgrath> LinuxCode: only tier0 and 1
20:15 < LinuxCode> k
20:15 < smooge> oh I thought Tampa was gone
20:15 < mmcgrath> dgilmore: so the future of that is going to look like this.
20:15 < ricky> The other mirrors should technically grab from tier 0 or 1
20:15 < mmcgrath> TPA's mirror has been offline since February but it is physically in PHX2 now just not completely hooked up.
20:15 < smooge> ah ok
20:15 < mmcgrath> They're going to get it setup, get the snapmirror working again, then we'll have some servers there that mount that netapp and share.
20:16 < mmcgrath> it'll be similar if not identical to what we have in PHX1.
20:16 < mmcgrath> for me the concern is whether the limiting factor is bandwidth or disk space.
20:16 < mmcgrath> and if it's bandwidth, we might need additional servers in PHX2 which I understand has a much faster pipe.
20:16 < mmcgrath> That's all regular internet stuff.
20:16 < LinuxCode> mmcgrath, what about failure ?
20:16 < mmcgrath> on the I2 side we're going to get RDU setup
20:16 < ricky> And will we get access to the rsync servers on the non-PHX sites?
20:17 < dgilmore> mmcgrath: so the same thing in RDU?
20:17 < mmcgrath> LinuxCode: well we'll have one in PHX and one in PHX2 so we'll be redundant in that fashion.
20:17 < LinuxCode> k
20:17 < mmcgrath> dgilmore: similar in RDU, though probably not a whole farm of servers.
20:17 < dgilmore> couple of boxes in front of the netapp?
20:17 < mmcgrath> we'll have proper I2 access there.
20:17  * SmootherFrOgZ is around btw
20:17 < mmcgrath> but one thing I'm trying to focus on there is using ibiblio as another primary mirror.
20:17 -!- mdomsch [n=Matt_Dom at 24.174.1.212] has joined #fedora-meeting
20:17 < mmcgrath> Or at least work it in to our SOP so it can be pushed to very quickly and easily instead of pulled from.
20:17 < mmcgrath> if we see one sign of problems from our primary mirrors, that can be set up and going.
20:17 < mmcgrath> we were lucky this last week.
20:18 < mmcgrath> no ssh vulnerabilities were actually real for example :)
20:18 < mmcgrath> so that's really what it's all going to look like.
20:18 -!- tibbs [n=tibbs at fedora/tibbs] has quit "Konversation terminated!"
20:18 < mmcgrath> smooge has some concerns about IOPS on the disk trays.
20:19 < mmcgrath> and we may have to take a more active role in determining what kind of trays we want in the future.
20:19 < dgilmore> mmcgrath: cool
20:19 < mmcgrath> this one was done between the storage team and netapp and months of their research.
20:19 < dgilmore> mmcgrath: its all sata right?
20:19 < smooge> yes.. the trays and how they are 'set' up were based on if they were FC.
20:19 < dgilmore> how big are the disks?
20:19 < smooge> and now they are 1TB SATAs
20:20 < smooge> the issue is that the SATAs perform at 1/3 the rate FC would
20:20 < dgilmore> so we have one shelf in each location?
20:20 < mmcgrath> smooge: I'd have hoped that months of research would have shown that though.
20:20 < smooge> but the FC would cost 8x more
20:20 < mmcgrath> I think their thoughts were that our FC rates were very underutilized.
20:20 < dgilmore> smooge: right id expect that kind of decrease in performance
20:21 < mmcgrath> So the longer term future on all of this is still in question.
20:21 < LinuxCode> mmcgrath, for the cost, you could make more mirrors
20:21 < smooge> mmcgrath, it could have been but more like 3/5's of capacity
20:21 < mmcgrath> and I'm pretty sure our problems, caused kernel.org's problems last week as well.
20:21 < LinuxCode> and maybe raid6+0 them
20:21 < mmcgrath> and his machines are f'ing crazy fast.
20:21 < LinuxCode> ehh raid5+0
20:21 < LinuxCode> raid6 be slow
20:21 < smooge> LinuxCode, the issue comes down to the number of spindles either way
20:22 < LinuxCode> hmm
20:22 < smooge> and the bandwidth of the controllers
20:22 < mmcgrath> LinuxCode: raid6 and raid5 with lots of disks have nearly identical read performance.
20:22 < LinuxCode> true that mmcgrath
20:22 < mmcgrath> But still, no ETA on any of that.
20:22 < LinuxCode> even with striping applied too ?
20:22 < smooge> anyway.. it is what it is or whats done is done or some other saying
20:22 < LinuxCode> smooge, hehe
20:22 < mmcgrath> I have a meeting with Eric (my primary RH contact) to find out about funding for new servers and what not for all of this.
20:23 < mmcgrath> and the scary part is we had these issues with just 1T of storage.
20:23 < mmcgrath> these trays were purchased so we could have closer to 8T of storage to use.
20:23 < LinuxCode> hmmm
20:23 < mmcgrath> If we find the trays can't handle it.... then I don't know what's going to happen but I know the storage team won't be happy.
20:23 < mmcgrath> So anyone have any additional questions on any of this?
20:23 < LinuxCode> are these san trays ?
20:24 < smooge> netapp
20:24 < LinuxCode> k
20:24 < mmcgrath> K, so that's that.
20:25 < mmcgrath> #topic Infrastructure -- Oddities and messes
20:25 -!- zodbot changed the topic of #fedora-meeting to: Infrastructure -- Oddities and messes
20:25 -!- sdziallas_ [n=sebastia at p5B045F68.dip.t-dialin.net] has joined #fedora-meeting
20:25 < mmcgrath> So have things seemed more fluxy than normal to anyone else or is it just me?
20:25 < mmcgrath> We've largely corrected the ProxyPass vs RewriteRule [P] thing
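[For the record, the two Apache proxying styles that were being reconciled differ in one important way; a minimal illustration -- the backend host and paths here are hypothetical, not the real proxy config:]

```
# mod_proxy: forwards /app and, via ProxyPassReverse, rewrites
# Location/Content-Location headers in backend responses
ProxyPass        /app http://app01.internal:8080/app
ProxyPassReverse /app http://app01.internal:8080/app

# mod_rewrite with [P]: also proxies (mod_proxy must still be loaded),
# but does no reverse header rewriting unless a matching
# ProxyPassReverse is added alongside it
RewriteRule ^/app/(.*)$ http://app01.internal:8080/app/$1 [P]
```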
20:25 -!- sdziallas [n=sebastia at fedora/sdziallas] has quit Nick collision from services.
20:25 -!- sdziallas_ is now known as sdziallas
20:25 < mmcgrath> but I still feel there's lots of little outstanding bugs that have crept in over the last several weeks that we're still figuring out.
20:25 < mmcgrath> of particular concern to me at the moment is smolt.
20:26 < mmcgrath> but there were other things like the openvpn issue ricky discovered yesterday.
20:26 < dgilmore> it seems like nagios has been having moments
20:26 < dgilmore> where we get alot of alerts
20:26 < ricky> Are we sure that the smolt changes were necessarily from the merge?
20:26 < ricky> smolt was one of the ones whose proxy config was complex enough that I didn't touch it much
20:26 -!- Pikachu_2014 [n=Pikachu_ at 85-169-128-251.rev.numericable.fr] has joined #fedora-meeting
20:27 < mmcgrath> ricky: I actually think the smolt issues were discovered not because of the change but because of a cache change you made.
20:27 < ricky> Ahhh, yeah.
20:27 < mmcgrath> I think nagios had been checking a cached page the whole time so even when smolt went down, nagios just didn't notice.
20:27 < LinuxCode> hehe
20:27 < mmcgrath> or at least didn't notice it unless things went horribly bad.
20:27 < mmcgrath> I'd like to have more people looking at it though
20:27 -!- sijis [n=sijis at adsl-75-49-223-86.dsl.emhril.sbcglobal.net] has joined #fedora-meeting
20:28 < mmcgrath> onekopaka has been doing some basic hits from the outside.
20:28 < mmcgrath> basically a "time smoltSendProfile -a"
20:28 < onekopaka> mmcgrath: I have.
20:28 < mmcgrath> and the times were all over the place.
20:28  * sijis is sorry for being late.
20:28 < mmcgrath> including about a 5% failure rate.
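[The external check onekopaka describes -- repeated timed runs plus a failure rate -- can be sketched like this. The `smoltSendProfile -a` invocation is the real client command from the log, but the harness itself is a hypothetical sketch, not the script actually used:]

```python
import statistics
import subprocess
import time

def time_runs(cmd, n):
    """Run cmd n times, recording (wall_seconds, succeeded) per attempt."""
    results = []
    for _ in range(n):
        start = time.time()
        ok = subprocess.call(cmd) == 0
        results.append((time.time() - start, ok))
    return results

def summarize(results):
    """Reduce (seconds, succeeded) pairs to median/max time and failure rate."""
    times = [t for t, _ in results]
    failures = sum(1 for _, ok in results if not ok)
    return {
        "median": statistics.median(times),
        "max": max(times),
        "failure_rate": failures / len(results),
    }

# e.g. summarize(time_runs(["smoltSendProfile", "-a"], 20))
# With synthetic timings -- "times all over the place", 1 failure in 4:
print(summarize([(1.2, True), (30.0, True), (2.0, False), (1.1, True)]))
```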
20:28 < mmcgrath> Of course I hate to be spending time on something that is clearly not in Fedora's critical path, but we've got to knock it out
20:28 -!- thekad [n=kad at 189.187.137.181] has joined #fedora-meeting
20:28 < LinuxCode> does smolt provide some debugging output thats useful ?
20:29 < mmcgrath> LinuxCode: it's almost entirely blocking on db.
20:29 < LinuxCode> as to network, dns issues
20:29 < mmcgrath> we even have the queries.
20:29 < LinuxCode> hmm
20:29 < LinuxCode> weird
20:29 < davivercillo> mmcgrath: I can try to help you with this ...
20:29 < ricky> Do we know which queries are causing the locked queries though?
20:30 < mmcgrath> ricky: not really
20:30 < mmcgrath> I still don't even understand why they're being locked
20:30 < mmcgrath> and why does locktime not mean anything?
20:30 < LinuxCode> is there a conn limit set up on the db end for the smolt unit ?
20:30 < ricky> locktime?
20:30 < mmcgrath> LinuxCode: it's not that weird, it's got 80 million rows :)
20:30 < mmcgrath> ricky: yeah in the slow queries log
20:30 < ricky> The time column on processlist is the time that the query has been in its current state
20:30 < abadger1999> Do we have any reproducers?  I can try with postgres but we'd need to know whether we've gained anything or not.
20:30 < ricky> Hm, I remember looking the slow queries one up
20:31 < mmcgrath> davivercillo: how's your db experience?
20:32 < LinuxCode> mmcgrath, so queries get processed, or a connection passes to the db server but it doesn't handle it, correct ?
20:32 < davivercillo> mmcgrath: not so much yet... but I can learn fast ! :D
20:32 < mmcgrath> LinuxCode: the queries take several seconds to complete
20:32 < mmcgrath> for example
20:32 -!- mcepl [n=mcepl at 49-117-207-85.strcechy.adsl-llu.static.bluetone.cz] has left #fedora-meeting []
20:32 < LinuxCode> hmmm
20:33 < mmcgrath> I don't even have an example at the moment.
20:33 < LinuxCode> np
20:33 < ricky> Ah, lock_time is the time the query spent waiting for a lock
20:33 < mmcgrath> but they're there.
20:33 < ricky> So for the queries in the lock state with high times in processlist, they should have high lock_time if they're in the slow query log
20:33 < mmcgrath> ricky: so if a query is running on a table for 326 seconds... does that mean it was locked that whole time?
20:33 < ricky> Depends on where the 326 number came from
20:34 < mmcgrath> ricky: in the slow queries log, do you see any queries that have a Lock_time above 0?
20:34 < mmcgrath> oh, there actually are some.
20:35 < mmcgrath> only 56 of 2856 though
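[The Lock_time question above can be answered by parsing the slow query log's header lines directly. A minimal sketch, assuming MySQL's standard `# Query_time: ...  Lock_time: ...` header format; the sample entries are made up, not from the real smolt log:]

```python
import re

# Matches the timing header MySQL writes before each slow-log entry
HEADER_RE = re.compile(r"Query_time: ([\d.]+)\s+Lock_time: ([\d.]+)")

def locked_queries(log_text, min_lock=0.0):
    """Return (query_time, lock_time) pairs whose Lock_time exceeds min_lock."""
    return [
        (float(m.group(1)), float(m.group(2)))
        for m in HEADER_RE.finditer(log_text)
        if float(m.group(2)) > min_lock
    ]

sample = """\
# Query_time: 326.000000  Lock_time: 320.500000 Rows_sent: 1  Rows_examined: 80000000
SELECT ... ;
# Query_time: 2.100000  Lock_time: 0.000000 Rows_sent: 10  Rows_examined: 5000
SELECT ... ;
"""

# Only the first entry spent time waiting on a lock:
print(locked_queries(sample))  # [(326.0, 320.5)]
```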
20:35 < mmcgrath> So anyway
20:35 < mmcgrath> davivercillo: how's your python?
20:35 < LinuxCode> could it be that smolt sends some weird query, that then causes it to hiccup ?
20:35 < mmcgrath> LinuxCode: nope, it's not weird queries :)
20:35 < LinuxCode> just a wild thought
20:35 < davivercillo> mmcgrath: I think that is nice...
20:35 < mmcgrath> it's just the size of the db
20:35 < LinuxCode> hehe
20:36 < onekopaka> joins + size = slowness
20:36 < mmcgrath> well and that's something else we need to figure out, we've spent so much time optimizing render-stats (which is still pretty killer)
20:36 < LinuxCode> mmcgrath, yeh but if you do something funky + huge db = inefficient
20:36 < mmcgrath> but we haven't looked at optimizing the sending profiles.
20:36 < davivercillo> mmcgrath: I did that script checkMirror.py, do u remember ?
20:36 < mmcgrath> huge db == inefficient :)
20:36 < davivercillo> :P
20:36 < LinuxCode> mmcgrath, haha of course
20:36 < mmcgrath> davivercillo: yeah but that was smaller :)
20:36 < LinuxCode> but there is no way around that
20:36 < mmcgrath> davivercillo: ping me after the meeting, we'll go over some stuff.
20:37 < davivercillo> mmcgrath: yep, I know... :P
20:37 < mmcgrath> if any of you are curious and want to poke around
20:37 < davivercillo> mmcgrath: Ok !
20:37 < mmcgrath> you can get a sample db to download and import here:
20:37 < mmcgrath> https://fedorahosted.org/releases/s/m/smolt/smolt.gz
20:37 < mmcgrath> It's about 500M
20:37 < thekad> mmcgrath, yes! thanks!
20:37  * thekad has been waiting to load something like that
20:37 < mmcgrath> Ok, I don't want to take up the rest of the meeting with smolt stuff so we'll move on.
20:38 < mmcgrath> #topic Infrastructure -- Open Floor
20:38 -!- zodbot changed the topic of #fedora-meeting to: Infrastructure -- Open Floor
20:38 < mmcgrath> Anyone have anything they'd like to discuss?
20:39 < dgilmore> importing meat pies from australia?
20:39  * mdomsch invited Bradley Kuhn to a future meeting to talk about agplv3
20:39 < thekad> mmcgrath, actually, about this smolt stuff, is there a ticket where we can track?
20:39 < mdomsch> we may have it cleared up by then, maybe not.
20:39 < SmootherFrOgZ> dgilmore:  :)
20:39 < dgilmore> mdomsch: have at it
20:39 < mmcgrath> thekad: not actually sure.  I'll create one if not.
20:39 < LinuxCode> mmcgrath, Id just like to know when you guys have time to help me do that new mapping of infra
20:39 < LinuxCode> it will probably take a few weeks, if not longer
20:39 < mmcgrath> mdomsch: yeah we were talking about it a bit earlier.  I saw your first email but not the second email :)
20:40 < smooge> dgilmore, are they mutton meat pies?
20:40 < thekad> I've seen this topic pop up several times, but we start from scratch every time, I think we could benefit there :)
20:40 < dgilmore> smooge: no
20:40 < LinuxCode> if that ticket still even exists
20:40 < smooge> dgilmore, then no thankyou
20:40 < dgilmore> smooge: four'n'twenty pies
20:40 < dgilmore> smooge: best ones ever
20:40 -!- jayhex [n=jayhex at 122.53.116.156] has joined #fedora-meeting
20:40 < mmcgrath> Ok, anyone have anything else they'd like to discuss?
20:41  * thekad is being dragged away by his 2yo daughter
20:41 < smooge> dgilmore, as long as they don't have raisins and such in them
20:41 < LinuxCode> mmcgrath, see above
20:41 < LinuxCode> to replace this
20:41 < LinuxCode> https://fedoraproject.org/wiki/Infrastructure/Architecture
20:41 < LinuxCode> was in the talk some time ago
20:41 < mmcgrath> LinuxCode: yeah you were going to add docs to git.fedorapeople.org
20:41 < LinuxCode> there was a ticket, but not sure what happened to it
20:41 < mmcgrath> err git.fedorahosted.org/git/fedora-infrastructure.git :)
20:41 < LinuxCode> k
20:42 < LinuxCode> well I will have time now, but need you guys to explain to me exactly whats where
20:42 < mmcgrath> .ticket 1084
20:42 < zodbot> mmcgrath: #1084 (Fix proxy -> app docs) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1084
20:42 < LinuxCode> so I just ask some stupid questions now and then
20:42 < mmcgrath> LinuxCode: Do you have some time to work on it this afternoon?
20:42 < LinuxCode> its kinda late now
20:42 < LinuxCode> ;-p
20:42 < LinuxCode> 21:42
20:43 < mmcgrath> LinuxCode: yeah I'll add some stuff.
20:43 < LinuxCode> k
20:43 < mmcgrath> for those docs I think it's less important on where stuff physically is, and more important on how the pieces fit together.
20:43 < LinuxCode> a list be ok
20:43 < LinuxCode> that give me a starting point
20:43 < mmcgrath> that's really what people are talking about when they do architecture
20:43 < LinuxCode> yah of course
20:43 < LinuxCode> to give people a better idea
20:43 < mmcgrath> LinuxCode: <nod>  i'll update that ticket shortly actually
20:43 < LinuxCode> excellent
20:43 < smooge> I have an open floor question
20:43 < mmcgrath> I think starting on the Proxy servers first would be a good way to go.
20:43 < mmcgrath> smooge: have at it
20:44 < LinuxCode> def
20:44 < LinuxCode> we talk another time
20:44 < smooge> Someone was working on a inventory system earlier. Does anyone remember who it was , where it was, etc?
20:44 < smooge> I can't find any reference versus IRC :)
20:44 < LinuxCode> inventory....
20:44 < LinuxCode> kinda rings a bell....
20:44  * nirik thinks it was ocsinventory. Not sure who was doing it tho. 
20:45 < mmcgrath> smooge: I think it was boodle
20:45 < sijis> i saw something on the list about ipplan. is that it?
20:45 < mmcgrath> .any boodle
20:45 < zodbot> mmcgrath: I have not seen boodle.
20:45 < smooge> boodle is a tool?
20:45 < smooge> boodle is a person?
20:45 < mmcgrath> mdomsch: you work with boodle right?
20:45 -!- sharkcz [n=dan at plz1-v-4-17.static.adsl.vol.cz] has quit "Ukončuji"
20:45 < mmcgrath> boodle is a dude(le)
20:45 < ricky> Heh
20:45 < LinuxCode> http://publictest10.fedoraproject.org/ocsreports/
20:45 < LinuxCode> thats in my ticket
20:45 < LinuxCode> not sure if that helps
20:45 < mdomsch> mmcgrath, yes
20:45 < LinuxCode> the machine aint up
20:45 < smooge> LinuxCode, what ticket
20:46 < mmcgrath> mdomsch: he was working on the inventory stuff
20:46 < LinuxCode> https://fedorahosted.org/fedora-infrastructure/ticket/1084
20:46 < LinuxCode> scroll to bottom
20:46 < LinuxCode> 03/16/09 20:36:44 changed by boodle
20:46 < mdomsch> mmcgrath, I remember; I haven't seen anything on that in a bit
20:46 < mdomsch> ha
20:46 < smooge> LinuxCode, thanks.. my browser skills FAILED
20:46 < mdomsch> yeah, since about then
20:46 < LinuxCode> smooge, haha
20:46 < mmcgrath> mdomsch: I just didn't know if he was still working on it or what
20:46 < LinuxCode> ;-D
20:46 < mmcgrath> but I think smooge has an itch to get it going.
20:46 < mmcgrath> and it's probably best to let him scratch it :)
20:46 < mdomsch> smooge, go for it
20:47 < mdomsch> just put a note in that ticket so he knows
20:47 < LinuxCode> that be something useful to me actually
20:47 < thekad> bump the ticket
20:47 < smooge> ok cool. mdomsch can you send me an email address so I can contact him too
20:47 < LinuxCode> to make those updated diagrams
20:47 < ricky> smooge: What was the software you had experience with again?
20:47 < smooge> exactly what he was using
20:47 < mmcgrath> I swear there was an inventory ticket he was working on
20:47 < ricky> Oh
20:47 < ricky> ocsinventory?  That might have ended...  a bit poorly
20:47 < smooge> mmcgrath, probably I have epic fail this week with searching
20:48 < ricky> I remember one of the ones he was trying, I found bad security problems on a quick lookover
20:48 < LinuxCode> ricky, with the app ?
20:48 < mmcgrath> ricky: do you know what happened with pb10?
20:48 < ricky> Yeah, grepping my IRC logs now
20:48 < mmcgrath> err pt10
20:48  * mdomsch has to run; later
20:48 < mmcgrath> mdomsch: laterz
20:48 -!- mdomsch [n=Matt_Dom at 24.174.1.212] has quit "Leaving"
20:48 < LinuxCode> http://publictest10.fedoraproject.org/glpi/
20:48 < LinuxCode> there is that one too
20:48 < LinuxCode> also kinda rings a bell
20:48 < smooge> yeah.. they tie into one another
20:48 < ricky> I have no idea, it might have just not gotten started on reboot
20:49 < LinuxCode> smooge, kk
20:49 < smooge> ocsng is the tool that polls the boxes
20:49 < smooge> glpi is the perty front end where you can enter data
20:49 < mmcgrath> .ticket 1171
20:49 < zodbot> mmcgrath: #1171 (Requesting a public test box to evaluate GLPI and OCS) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1171
20:49 < mmcgrath> smooge: see that ticket as well
20:49 < thekad> mmcgrath, that's the one
20:49 < ricky> Yeah, OCS was the security hole one
20:50 < smooge> geez I really failed
20:50 < ricky> Like I was able to delete a row from some table without logging on or anything
20:50 < thekad> .ticket 1084
20:50 < zodbot> thekad: #1084 (Fix proxy -> app docs) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1084
20:50 < smooge> I went looking for GLPI
20:50 < thekad> that's the next one
20:50 < ricky> I didn't look much closer at the security stuff after an initial look at it though.
20:50 < LinuxCode> ricky, did you report that ?
20:50 < nirik> ricky: nasty. ;( It's in fedora, you might note that to the maintainer.
20:50 -!- easter_egg [n=freeman at 201-75-42-40-ma.cpe.vivax.com.br] has joined #fedora-meeting
20:51 < mmcgrath> well
20:51 < mmcgrath> just the same
20:51 < mmcgrath> smooge: you want to open up an "inventory management" ticket?
20:51 < ricky> mmcgrath: Looks like publictest10 just didn't get started on a reboot - should I start it again?
20:51 -!- kolesovdv [n=kolesovd at 82.162.141.18] has quit Remote closed the connection
20:51 < smooge> mmcgrath, put that down as an action please
20:51 < mmcgrath> ricky: sure, smooge might be able to use it
20:51 < smooge> I will start on it right away
20:51 < mmcgrath> #action Smooge will create a ticket and get started on inventory management
20:52 < smooge> ricky, we will see if the updated version has the bug and then work it out
20:52  * davivercillo need to go home now ! See you later !
20:52 < ricky> OK.  I just remember getting a really bad impression from that and the other code, but hopefully some of this is fixed.
20:52 < davivercillo> Good Night !
20:52 < mmcgrath> davivercillo: ping me when you get time later
20:52 < mmcgrath> or tomorrow :)
20:52 < mmcgrath> or whenever
20:53 < davivercillo> mmcgrath: for sure
20:53 < davivercillo> bye
20:53 -!- davivercillo [n=daviverc at 146.164.31.95] has left #fedora-meeting []
20:53 < mmcgrath> So we've only got 7 minutes left, anyone have anything else to discuss?
20:54  * ricky wonders if sijis wanted to say anything about blogs
20:54 < mmcgrath> sijis: anything?
20:54 < mmcgrath> abadger1999: or anything about zikula?
20:54 < sijis> yeah, as you saw, the authentication part on the blogs is working.
20:54 < ricky> Thanks for working on that plugin
20:54 < abadger1999> mmcgrath: When should we get docs people started in staging?
20:55 < sijis> we are able to also verify that minimum group memberships are met before allowing a login
20:55 < abadger1999> I think they have all of the packages in review.
20:55 -!- kolesovdv [n=kolesovd at 82.162.141.18] has joined #fedora-meeting
20:55 < abadger1999> But they're not all reviewed yet/some are blocking on licensing.
20:55 -!- danielbruno [n=danielbr at thor.argo.com.br] has quit Remote closed the connection
20:55 < thekad> sijis, which groups are those? cla_done?
20:55 < sijis> a person has to be in cla_done and one other non-cla group
20:55 < ricky> Is http://publictest15.fedoraproject.org/cms/ really as far as they're going to take the test instance?  Not trying to complain, but I'm just used to seeing slightly more complete setups in testing first
20:55 < mmcgrath> abadger1999: how long till the licensing is resolved do you think?
20:56 < sijis> there are few minor things to work out.. but it should be ready to be tested.
20:57 < ke4qqq> ricky - we need to spend more time on pt15 - we largely haven't done anything with it in months. specifically we need to get all of the pieces that we have packaged, and beat on it
20:57 < abadger1999> mmcgrath: I encountered problems in both packages I reviewed.  One has been resolved (I just need to do a final review) the other is waiting upstream.  docs has contacted several people related to that
20:57 < ricky> Ah, cool, so maybe not quite staging-ready yet
20:57 < ke4qqq> ricky: hopefully not far off
20:57 < abadger1999> ianweller also encountered some major problems in one that he reviewed -- but I think it might have been optional.
20:57 < ricky> Cool, thanks
20:58 < abadger1999> ke4qqq and sparks would know for sure.
20:59 < ke4qqq> we still have three (and maybe four) that are blocked on licensing probs, though that includes the one that's awaiting abadger1999's final approval
20:59 < mmcgrath> abadger1999: hmm
20:59 < mmcgrath> abadger1999: what are the odds they won't be resolved?
20:59 < abadger1999> ke4qqq: Want to field that?  And any contingency if that happens?
20:59 < ke4qqq> mmcgrath: I think we'll workaround - upstream is pretty committed to fixing stuff
21:00 < ke4qqq> there is just a ton of stuff
21:00 < mmcgrath> <nod>
21:00 < mmcgrath> Ok, so we're at the end of the meeting time, anyone have anything else to discuss?
21:00 < jayhex> just want to say hi before we end. Julius Serrano here.
21:00 < mmcgrath> jayhex: hello Julius!
21:00 -!- oget_ [n=oget at c-69-137-139-64.hsd1.pa.comcast.net] has joined #fedora-meeting
21:00 -!- oget_ is now known as oget_zzz
21:00 < mmcgrath> thanks for saying hey.
21:00 < thekad> welcome jayhex
21:00 < ricky> jayhex: Hey, welcome!
21:01 < sijis> jayhex: welcome.
21:01 -!- danielbruno [n=danielbr at thor.argo.com.br] has joined #fedora-meeting
21:01 < mmcgrath> Ok, if no one has anything else, we'll close in 30
21:01 -!- biertie [n=bert at 174.57-247-81.adsl-dyn.isp.belgacom.be] has joined #fedora-meeting
21:02 < mmcgrath> #endmeeting
21:02 -!- zodbot changed the topic of #fedora-meeting to: Channel is used by various Fedora groups and committees for their regular meetings | Note that meetings often get logged | For questions about using Fedora please ask in #fedora | See http://fedoraproject.org/wiki/Meeting_channel for meeting schedule
21:02 < zodbot> Meeting ended Thu Jul 23 21:02:06 2009 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot .
21:02 < zodbot> Minutes:        http://meetbot.fedoraproject.org/fedora-meeting/2009-07-23/fedora-meeting.2009-07-23-20.00.html
21:02 < zodbot> Minutes (text): http://meetbot.fedoraproject.org/fedora-meeting/2009-07-23/fedora-meeting.2009-07-23-20.00.txt
21:02 < zodbot> Log:            http://meetbot.fedoraproject.org/fedora-meeting/2009-07-23/fedora-meeting.2009-07-23-20.00.log.html