mozilla :: #rust-infra

19 May 2017
00:28jonhoohey! my twitter rust nightly bot works!
00:28jonhoo(also, nightly builds work correctly again!)
00:35baileynPhrohdoh, did you make sure curl's development packages were installed?
00:35baileynOh he's gone
09:06nagisaso it seems like appveyor is broken?
13:17aturonaidanhs: TimNN: ^
13:18TimNNaturon: Yeah, seems to be still ongoing:
13:26aidanhsalthough I can see this being a "rust project should use its own infrastructure" argument, I for one appreciate that someone else is running around putting out fires rather than us :)
13:53misdreavuscool, i was about to report some appveyor weird breakage on my pr, but i guess it's known
13:54misdreavusthough that incident report sounds different from what happened on mine?
13:54misdreavus"unable to download packages from source" for downloading things from
13:55steveklabniksome other people were reporting intermittent network errors using cargo on windows
13:56TimNNmisdreavus: Actually, I think cargo may be downloading stuff in parallel, so the stuff that's actually failing are git clones (and not the downloads)
13:57misdreavusi'll leave it be then
13:57TimNNmisdreavus: Also, we've seen that error on lots of other builds, so whether or not it is that specific issue, it still seems to be an appveyor network problem
13:57misdreavusi was thinking i'd heard about it in here before, but i wasn't sure
13:57misdreavushadn't checked the list of known spurious failures
14:11acrichtoso actually
14:11acrichtoi don't think our failures have anything to do w/ the appveyor incident
14:11acrichtothey're all failing to get things from
14:12acrichtoanother project that's failing
14:12acrichtothis may actually be a "let's encrypt is down" problem
14:12acrichtolike sure we have slow clones, but that's not what's failing
14:17TimNNacrichto: Wow. Looking at that status page, it at least seems like they fixed the issue and builds should start to recover now
14:18carols10centsacrichto: oh btw i added the free librato addon to to help facilitate our own status page someday
14:20carols10centsacrichto: and then i created a public thingie that anyone can see, it doesn't have anything sensitive, just graphs
14:20carols10centsfor the last 60 min only
14:20nagisaletsencrypt does not host anything tho
14:20nagisadid our certs expire at the same time letsencrypt went down?
14:21misdreavusthe letsencrypt status page mentioned ocsp responders
14:24acrichtocarols10cents: I saw that it's looking awesome!
14:24acrichtocarols10cents: if it's cheap feel free to throw on the paid version as well
14:24acrichtonagisa: the error on windows is specifically related to revocation checking
14:25nagisaI get it now
14:25nagisashoddy error message though
14:26carols10centsacrichto: there's a bunch of plans; $20/mo gets us a year of retention rather than 60 min and the dyno metrics charts that are blank. i think that's all we'd need for now? does that count as cheap?
14:26carols10centsplans here:
14:27acrichtocarols10cents: that sounds awesome
14:27acrichtois that a button I need to click?
14:28carols10centslemme see
14:28carols10centsacrichto: lol nope, i can totally spend moz's money
14:28acrichtoyeah our logging currently costs more than our database
14:28acrichtowhich is kinda crazy
14:29carols10centsreload that dashboard and the dyno graphs are now starting to populate and there's more choices for looking back at historical stuff
14:29carols10centscool, it was retaining data even though it wasnt letting us see it. it's like 70% 302 status codes
14:29acrichto"gee I wonder why alex"
14:29carols10centsso it's got data til 2 days ago when i added it
14:30acrichtolol close enough
14:32nagisaacrichto: use http2, redirects almost no overhead
14:45simJust FYI I (simulacrum) am having internet troubles so while I will attend meeting today I probably won't be pingable for most of the day
14:45simFeel free to email me though
15:59larsbergIs broken on Windows? According to jdm, it appears that all our appveyor and windows buildbots are getting errors downloading packages. e.g,
16:00larsberg"error: unable to get packages from source"
16:01carols10centslarsberg: acrichto thinks it's revocation checking of letsencrypt that's the problem?
16:01acrichtolarsberg: carols10cents:
16:01acrichtolet's encrypt i think is having difficulties
16:02acrichtoand looks like schannel isn't dealing well with it
16:02acrichtoyou can fix it via cargo
16:02acrichto.cargo/config: [http] check-revoke = false
16:02larsbergacrichto: carols10cents awesome, thanks!
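The workaround acrichto describes is a one-line Cargo config change. A minimal sketch, using a throwaway CARGO_HOME so nothing permanent is touched (`[http] check-revoke` is real Cargo configuration; the temp-dir setup here is just for illustration, and in practice the file is `~/.cargo/config`):

```shell
# Sketch of the workaround above: tell Cargo to skip TLS certificate
# revocation checks (the schannel failure mode during the Let's Encrypt
# OCSP responder outage). Uses a throwaway CARGO_HOME so this example is
# side-effect free.
export CARGO_HOME="$(mktemp -d)"
cat > "$CARGO_HOME/config" <<'EOF'
[http]
check-revoke = false
EOF
cat "$CARGO_HOME/config"
```

Since revocation checking is a security feature, this is a temporary mitigation to be reverted once the OCSP responders recover, not a permanent setting.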
16:02carols10centslooks like they're still having problems
16:02larsbergWell, not awesome, but glad to know it's not us :-)
16:05carols10centswooo high fives all around, we didn't fuck up this time yay!
16:07larsbergplot twist: letsencrypt bug is hiding something I *did* screw up
16:07larsbergfeels like one of those fridays where I roll out barely-tested infra changes!
16:07larsberg"let's enable a few more builders and split some jobs, what could possibly go wrong?"
20:45simulacrum_acrichto: ping
20:45acrichtosimulacrum_: pong
20:45simulacrum_Do you want to discuss the cargo thing I found here or in #cargo?
20:45simulacrum_Either works for me
20:45acrichtoeh here's fine
20:46acrichtoso you can reliably repro this?
20:46acrichtocan you generate the full error with -v ?
20:47simulacrum_I can reliably repro, will work on -v
20:47simulacrum_-v is slightly hard because I don't actually know what's calling cargo
20:47simulacrum_I just know it happens somewhere in (the python part)
20:47simulacrum_During submodule updates
20:47acrichtooh weird
20:48acrichtothat's probably in the script itself
20:48acrichtolemme get that location
20:48simulacrum_And possibly only on linux, not sure. My connection to my build server has been flaky, I can reproduce reliably there
20:48simulacrum_But can't locally on os x
20:49acrichtocan you add -v there?
20:49acrichtojust to get the full error?
20:49simulacrum_Yeah, let me try
20:55simulacrum_Trying to reproduce in docker since my build server (where this was reliable) is down right now...
20:57acrichtooh no :(
20:59simulacrum_I'll ping you when either a) I get access to it or b) I can reproduce :)
20:59simulacrum_acrichto: Any idea where cargo stores things when in docker? And/or where rustbuild even gets cargo from?
21:00acrichtosimulacrum_: in theory $HOME/.cargo
21:00acrichtowhich may end up being /root/.cargo
21:00simulacrum_ah, found it
21:00simulacrum_Okay, so it looks like it's working in docker
21:00simulacrum_I suspect it might've been a cargo problem that's been fixed?
21:02simulacrum_acrichto: Is there something recent that's gone into cargo that changed submodule handling? I don't recall anything myself
21:03acrichtosimulacrum_: shouldn't be afaik
21:03acrichtonot related to git repos
21:04simulacrum_I wonder if it was environment specific somehow; but I can't think of what could be different
21:04simulacrum_I guess I'll wait until I have access to my build server which should be tonight
21:06simulacrum_acrichto: I'll get back to you I guess; seems to not reproduce anywhere but the build server (which may have been fixed by now too, no idea if it was version specific, but shouldn't have been I think, since the cargo rustbuild uses is presumably always beta cargo)
21:07acrichtook no worries
21:07simulacrum_Is that correct that rustbuild always uses the same cargo?
21:07acrichtoshould be yeah
21:09simulacrum_The only thing I can think of is that while Cargo stayed the same the submodule had never changed before
21:31carols10centswhere is dear leader?
21:31aturonhi hi hi
21:31aturonsorry was just posting
21:31aturonoh i need to wave!
21:32aturonfrewsxcv: yt?
21:32aturonalright let's get started!
21:32aturonfirst topic: nightlies!
21:33aturonwe had a brief outage, which is Not Supposed to be Possible
21:33aturonthe fix has landed afaik?
21:33acrichtothis breakage falls into the "well, we're not perfect" category
21:33aturontomprince: ohai
21:33acrichtonightlies outage was b/c... well it's fixed now
21:33acrichtodunno how to prevent this in the future though
21:34aturonacrichto: can you spell out a bit what happened?
21:34aturonjust for all of our education
21:34acrichtosure yeah, it was specifically around submodule management
21:34acrichtorustbuild itself checks out submodules
21:34acrichtobut on CI that's not what we do
21:34acrichtoon CI a script manages submodules and then the build happens in a container where rustbuild takes over
21:34acrichtoso it's possible to check in a change that breaks rustbuild submodule management
21:34acrichtowhich means `./configure + make` is not guaranteed to work
21:35acrichtothe change here was that a Cargo.toml that was a member of the rustbuild workspace was a member of a submodule
21:35acrichtoso for cargo to build rustbuild we needed to update submodules
21:35acrichtoto update submodules though we needed rustbuild
21:35acrichtoa fix has now been implemented to move submodule management to the python script
21:35acrichtoso this hopefully won't happen again
21:35aturon"the python script" = ?
21:35acrichtooh ""
21:35aturonok i figured
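The cycle acrichto describes can be sketched in a few lines: submodule checkout has to happen before cargo is asked to build rustbuild, because a workspace member now lives inside a submodule. This is illustrative shell run against an empty demo repository, not the real python script's logic; the cargo command is shown only as a comment:

```shell
# Illustrative sketch of the fix described above (not the actual script):
# update submodules from the top-level entry point *before* cargo builds
# rustbuild, so workspace members that live inside submodules exist.
set -e
repo="$(mktemp -d)"          # stand-in for a fresh rust checkout
git init -q "$repo"
cd "$repo"
# step 1: script-side submodule init (a no-op in this empty demo repo)
git submodule update --init --recursive
# step 2: only now would cargo build the build system, e.g.
#   cargo build --manifest-path src/bootstrap/Cargo.toml
echo "submodules ready; safe to build rustbuild"
```

Doing step 1 outside rustbuild is what breaks the "need rustbuild to update submodules, need submodules to build rustbuild" loop.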
21:36simul1Can we have a bot that just checks submodule support in rustbuild w/o building rustc?
21:36acrichtowe could yeah
21:36acrichtoit's just whack-a-mole though
21:36acrichtoI doubt we'd ever regress the bot ever again
21:36aturonwell ok so to be clear
21:36aturonbasically all of our problems producing artifacts tend to come down to "something was different from where we tested"
21:36aturonthe move to our current travis+appveyor system was meant to alleviate that
21:37aturonbecause nightlies are literally produced as a byproduct of our normal CI
21:37aturonbut it turns out there was still some divergence, around submodules
21:37aturonand it sounds like, by moving this into the script, there's no possibility of such divergence?
21:37acrichtoso currently it's impossible to have 0 divergence
21:37acrichtob/c we're literally running in a different env
21:37acrichtoe.g. in rust-central-station instead of travis
21:37acrichtoso given that we have *some* divergence the goal is to minimize it as much as possible
21:38acrichtowell no hang on
21:38acrichtolet me rephrase
21:38acrichto14:37 <~aturon> because nightlies are literally produced as a byproduct of our normal CI
21:38aidanhsand users can have pretty bizarre environments (I have rust itself as a submodule in some places and have had to be vigilant about PRs that break with that environment)
21:38acrichtothat's not 100% correct
21:38acrichto99% of a nightly is produced by CI
21:38acrichtobut the final piece, manifests, are produced later
21:38aturonah right, there's a bot that copies
21:38acrichtoso we don't even test that piece on CI
21:38acrichtoso any breakage could happen there
21:39acrichtoso in a sense we are just not engineered at all to have 0 difference here
21:39acrichtobut rather we're engineered for as small a difference as possible
21:39acrichtowhere the delta is "check out the repo and run a command"
21:39aturonyep, fair enough
21:39acrichtothat step isn't tested on CI at all
21:39aturonit's far far closer than it used to be :)
21:39acrichtowe could in theory I guess
21:39acrichtojust keep reproducing an old release
21:39acrichtothat... would be a good idea probably
21:39acrichtoa bot that emulates exactly what rust-central-station does
21:39acrichtomaybe literally checking out rust-central-station
21:40acrichtoyes it's possible to have more gating here
21:40acrichtounclear if we're far in the realm of "diminishing returns"
21:40aturonso to be clear, i don't think this situation calls for any immediate action (other than discussion)
21:40acrichtoI don't mind opening an issue
21:40acrichtoand continuing discussion there
21:40aturoni'm not sure either, but i'd at least want to get hit by this another time or two before investing time in it
21:40aturonyeah, that seems good, can basically write down the above discussion
21:41acrichtok will do
21:41aturonare we basically using rust-central-station as the core place for infra issue tracking?
21:41aturon(at least stuff that doesn't obviously belong elsewhere)
21:41acrichtothat's what I'm thinking
21:41aturonthat sounds great
21:41aturontwo things:
21:41acrichtothis is sorta "rust central station needs integration testing"
21:41acrichtoso it naturally fits there anyway
21:41aturon1. we should move it to rust-lang
21:41aturon2. everybody on the infra team should be Watching the repo on github
21:42frewsxcvHi, I'm here, sorry
21:42aturonfrewsxcv: o/ no worries!
21:42aturonlike, over time we're going to accumulate a bunch of issues that we won't want to tackle right away but need to track
21:42acrichto(did that 20 seconds ago)
21:42aturoni'll try to give some thought to how to keep the issue tracker manageable from the get-go
21:43aturonso we don&#39;t end up with another mess that simul1 has to clean up :)
21:43aturonany other thoughts/questions re: the nightly outage?
21:43aturonoh actually, there was one other thing
21:43aturoni would really like to think about some *super* lightweight way to communicate infra status
21:44aturonlike, it'd be great if there was a page people could go to when a nightly wasn't being produced, or, ahem, appveyor was keeling over
21:44simul1Static page with manual update
21:44simul1Hosted by GitHub
21:44carols10centstwitter account
21:44aturonyes :)
21:44aturontwitter seems maybe best?
21:44acrichtoif we optimize for lightweight, my preference --
21:44aturonin that it's super easy to post
21:44acrichtobot on IRC where when we ping it it tweets
21:45simul1Twitter might be easiest, yeah
21:45aturon(btw, in case you haven't seen it:
21:45aturonacrichto: ok, i bet that exists
21:45aidanhsacrichto: exactly my thought :)
21:45aturonso, who wants to set up a twitter account and hook that up?
21:45aturonfirst to volunteer gets to pick the handle :)
21:46ericktwe could setup a statuspage a la
21:46ericktI know there are a few open source versions out there
21:46aidanhsI volunteer
21:46ericktit'd be nice to have a
21:46aturonerickt: yeah, i agree in the medium term, though we'll want a twitter account regardless IMO
21:46aturonaidanhs: ok, will note as action item!
21:46aidanhsname will be "python_infra" unless anyone has better suggestions ;)
21:46simul1Twitter is probably faster
21:47simul1Rusty Rust
21:47* misdreavus gets pinged at mentions of twitter >_>
21:47aturonMOVING ON :)
21:47aturonsimul1: you had an item about additions increasing compiler download size; wanna talk about that?
21:48simul1Well, I can try on phone
21:48aturonsimul1: ah, sorry -- will you be on computer later?
21:48steveklabnikwe have a status page in the works
21:48simul1Basically we are shipping artifacts for all deps on
21:48steveklabnikwith a twitter handle
21:48aturonsteveklabnik: oh? ideally these would be a single thing
21:48aturon(i think)
21:49simul1aturon, yeah, will be at computer later
21:49carols10centsaturon: oh i thought you wanted a twitter account just for build status and infra other than
21:49carols10centswhich is why i didnt mention it
21:49aturoncarols10cents: well hm, i dunno
21:49steveklabnikaturon: long ago when we had the first outage i registered @cratesiostatus
21:49aturonthere is a user/contributor split here
21:50steveklabnikand then, gave it to carol when she started doing more work
21:50simul1Might be nice to have user status and Dev status
21:50aturonbut i'm not sure outages are common enough that it's worth having two?
21:50steveklabnikyou want outage info to be *very* focused
21:50steveklabnikbecause you want to be able to alert on it reliably
21:50steveklabnikso, having two accounts for user/contributor makes sense to me
21:50carols10centsi've been tweeting things like new features deployed to too
21:50steveklabnikbut i am not on the infra team so do whatever :)
21:50aidanhsI personally would like to be able to go to one place to see if something is up with the rust ecosystem
21:51aidanhssomething focused doesn't sound like it'd fulfill that?
21:51carols10centsi mean, we can create a dashboard aggregating everything eventually
21:51aturoncarols10cents: hm, it seems like that should be split out tbh. kinda like we have rustlang, and status info should live elsewhere
21:51carols10centsfor user/developers
21:51carols10centsyeah i probably havent put as much thought into the tweets as i could have
21:52steveklabnikSTORY OF MY LIFE
21:52simul1Temporarily, just setup a page that hosts N Twitter feeds
21:52carols10centssteveklabnik: lol
21:52aturoni think personally i'd like to start with a single account for infra outages of any kind
21:52aturonand if we find that's not working well, we can split out later
21:52aturoni think it's easier to go that way than the reverse
21:52carols10cents10 twitter accounts and they all retweet each other
21:52simul1That seems fine re 1account
21:52aturonok, back to simul1, sorry!
21:53misdreavusit's possible to rename handles, if you want to convert @cratesiostatus into @RustStatus
21:53ericktaturon: we could setup a simple redirect of to the twitter page
21:53aturonerickt: +1
21:53carols10centswait but simul1 still isnt at a computer yet right?
21:53aturonerickt: (noted as action item)
21:53simul1Aturon, let's table the size until computer
21:53aturonnext up, a note that highfive doesn't let you r? a team
21:54aturone.g. doesn't work
21:54aturonit'd be nice if it did but i still think we can just do this manually
21:54simul1More so, it seems to unassign and assign no one iirc
21:54aturonah, seems bad
21:54misdreavusyes, i've done that one accidentally
21:55aturonwell so i dunno what to say about highfive in general
21:55aturonour version is not really maintained
21:55aturonmeanwhile, servo's is, but has diverged quite a bit
21:55aturonit'd be great to clean this all up at some point, but seems pretty low-priority
21:55simul1Me too. I took a look at homu and was discouraged really
21:55aturon(but if somebody wants to take it on, shout!)
21:56aturoni'll open an rcs issue about figuring out our highfive story in general
21:56aturonand until then, we'll just have to sit on feature requests
21:56Diggseyaturon: so which team does rustup fall under...
21:56simul1+1 to just sit
21:57aturonDiggsey: dev-tools
21:57aturonDiggsey: i think nrc didn't list separate peers since brson is part of core devtools
21:57aturonDiggsey: but we're in the middle of a meeting here, can talk more later
21:57Diggseyoh sry
21:57aturonok, next, someone linked to
21:57simul1That'd be me, doing issue triage
21:58aturonmy inclination is just to close, i don't think we want to spend time on this, especially since homu is also unmaintained for us
21:58* aturon is sensing a bit of a theme here...
21:58simul1I have no strong feelings personally, close seems fine
21:58aidanhsI can see both sides of the argument, so another close vote
21:58frewsxcvvote to close
21:59aturonthat's the spirit
21:59aturonsimul1: i'll leave that to you
21:59simul1If anyone wants to do that feel free, otherwise can aturon note that?
21:59aturonsimul1: will do
21:59aidanhsaturon's getting into the hardline issue closing spirit
22:00simul1Close everything
22:00aturonok, last ad hoc topic, there was a question about S-waiting-for-team
22:00aturon"if an issue hasnt been updated for (say) a week, theres no new information a stale PR reviewer has to update it, so we may as well only review those updated in the last 2 days or so. This makes it more explicit that this is a bit of a black hole"
22:00aidanhsoh yeah, me
22:01aturonthat proposal makes sense to me
22:01simul1I review all prs because I don't trust the GH update note
22:01aturonaidanhs: want to PR against the forge?
22:01aidanhscool, I&#39;ll change the filter etc
22:01simul1But seems fine
22:01aturonsimul1: fwiw, acrichto did an audit and things mostly looked good
22:02acrichtooh not for that issue
22:02shep2 days might be sketchy considering weekend
22:02acrichtothat was for something else
22:02aturonshep: note that we have an assignee on weekend,
22:02aturonbut yeah that doesn't leave us buffer
22:02aturon3 is probably safer
22:02aturonacrichto: ah ok, must've misremembered
22:02aturoni'd say we can leave it at reviewer discretion
22:02acrichtoI'm not sure I quite follow this update though, what's changing about the waiting-for-team piece?
22:02aturonif simul1 looks at all of 'em each week, that's good
22:03shepand a week would allow everyone one chance to look at it
22:03aturonacrichto: basically that you don't need to look at ALL of them each time
22:03aturonacrichto: only ones that have recent updates
22:03aturonbecause otherwise, there&#39;s nothing for you to do
22:03aidanhsacrichto: it's more my observation that once it gets in that status, only team action can rescue them
22:03simul1There's like 5 anyway, no
22:03acrichtoheh I haven't been looking at any of them
22:03carols10centsi haven't been looking at all of them each time
22:03carols10centsacrichto: much same
22:03aidanhswell then we're just formalising unofficial policy :)
22:03acrichtomy assumption for a change here is "week old PRs get a ping to the team to plz take action"
22:04acrichtoalthough now I'm curious
22:04aturonopening sentence of S-waiting-on-team PRs: "All PRs should be processed"
22:04acrichtowhat are we looking at these for?
22:04aturonas per
22:04acrichtoI thought it was just for T- tags?
22:04carols10centsyeah that!
22:04simul1T- tag and potential status change
22:04aturonthe current directions include checking for status change
22:04carols10centsFirst, ensure that the status tag matches the current state of the PR. Change the tag if necessary, and apply the procedure for the new tag now. Verify that there is a T- tag for all PRs that remain in this category.
22:04shepacrichto: cause a team will finish looking at it and forget to set it back
22:04carols10centsright, but if there is no change
22:04acrichtoah ok that makes sense
22:05acrichtook this change cgtm
22:05aturoncarols10cents: yes, that was exactly aidanhs's point that kicked off this discussion
22:05acrichtomakes sense
22:05carols10centsah ok.
22:05aturonthat basically, we only need to do T- tag when initially setting, and then otherwise look only when something actually got updated
22:05shepI agree with the change; just would say to only show "changed in last 7 days"
22:05aturonalright onward!
22:05aturonso a couple weeks back we set out some medium-term projects
22:05aturoni was hoping we could chat a bit about the cargobomb one
22:06aturoni know that some work has been happening, was curious where things are, if there's a clear direction, how others can help etc
22:06aturoncc frewsxcv, tomprince :)
22:06simul1Tomprince and aidanhs I think have been working a lot
22:07aturon(brson is away today, just to be clear)
22:07simul1Or it was frewsxcv?
22:07aturonit was frewsxcv
22:08aturon(just giving them a chance to type :)
22:08frewsxcvtomprince and i have been working a bunch
22:08aidanhssimul1: lol I was wondering what I'd been sleep-coding
22:08tomprinceWe've mostly been doing cleanup work, and getting a handle on the codebase.
22:08simul1Ah I misremembered
22:08frewsxcvsorry, busy with work right now. but there's not a super clear mid-long term direction
22:08aturonfrewsxcv: oh np
22:08frewsxcvmainly just us cleaning up the codebase in a way that makes sense to us
22:08aturonthat's a good place to start!
22:08aturonbut yeah, it seems helpful to set some kind of initial goal
22:09aturoni just don&#39;t have a good sense for where we are right now --
22:09aturonlike, can the tool reliably be run (by brson)?
22:09aturonis the output usable (by not-brson)?
22:09simul1By anyone?
22:09frewsxcvi've got an instance running to
22:10frewsxcvhopefully should be pretty straightforward from the steps in the readme
22:10steveklabniki had tried but couldn't quite get it going
22:10steveklabniki sent in a patch to get it building on windows at least
22:10aturonhow reliable is it? like, crater tends to require quite a bit of babysitting
22:10simul1And how long?
22:10frewsxcvnot a lot of babysitting from my experience
22:10frewsxcvfull run on the machine i have it on takes like four days
22:11aturonand afaik it's possible to run it on a more limited set of crates
22:11steveklabnikbrson said 72 hours
22:11steveklabnikfor him
22:11aturonso i know that nmatsakis at least has complained that the output is hard to use relative to crater
22:11aturoni&#39;m not sure exactly why
22:11nmatsakismmm crater isn't great either
22:11nmatsakistoo many clicks
22:12aturonah ok, that&#39;s a more general point
22:12nmatsakishowever, I have an issue describing some thoughts
22:12nmatsakisalso, I'm not really here :)
22:12aturonnmatsakis: on cargobomb?
22:12aturonhm, so one other question --
22:12aturonhow is the "job management" side of things?
22:13aturonlike, a great place to get to would be to let anybody on the rust team do an AWS run and get results
22:13aturonbut i don't know if it's set up for that at all at this point
22:13tomprinceIt isn't setup for that at all.
22:13simOkay I'm at computer in theory
22:13aturontomprince: ok, is it more like pure batch execution right now?
22:13ericktwhere is cargobomb run from? a short lived aws instance?
22:14aturonbrson runs it somewhere on aws
22:14tomprinceWe've talked a little bit about figuring out how to distribute work, but we haven't started to work on that at all.
22:14tomprinceI know I haven't dug into that part of the code yet.
22:14aturontomprince: even before distributing work, just keeping track of requested runs could be helpful
22:15ericktack, it's docker container ubuntu 15.10?
22:15aturonso basically: my gut instinct here is that we should try to get this to some kind of MVP state
22:15ericktlooks like I got my first fix to cargobomb queued up
22:15aturonerickt: :D
22:15aturonMVP here, i think, is basically that (1) people other than brson can use it with aws backing and (2) these different uses are tracked and results delivered separately
22:16aturonlike, what we&#39;ve always wanted was `@cargobomb: test`
22:16aturonbut until we get there, having to do some manual process is ok, as long as we can make it relatively usable
22:16tomprinceWell, right now, it is just a CLI app, that generates results in a local directory.
22:16tomprinceI think having a story about distributing work is somewhat tied into figuring out how to turn it into a service.
22:16aturonin principle others can run the old crater, but in practice it requires too much care
22:17tomprinceEven if as a first pass, all the jobs are run locally.
22:17aturonfair enough
22:17aturonok so with all of that said, is there some MVP goal we can see our way to here?
22:17aturon(or even increments in that direction)
22:18aturonor still need more time digging into the existing code to get clarity?
22:18frewsxcvsorta that
22:19frewsxcvthough i am starting to have thoughts beyond that. nothing concrete yet. niko's issue might be a good next step
22:19tomprinceYeah. Perhaps frewsxcv and I can also take some time outside this meeting to talk about strategies for having a MVP usable by others.
22:19ericktaturon: I can whip up some stuff to setup an isolated AWS environment pretty simply
22:19aturonyeah -- i guess my feeling though is that getting it usable at all comes before improving the output
22:19aturontomprince: frewsxcv: ok, that sounds great!
22:20aturoni think we just need to have an MVP mindset, and try to identify the minimal goal that will allow this to be usable by the whole rust team, even if inefficiently
22:20tomprinceIt would probably be useful if people tried to run it, and gave us some feedback on the result.
22:20simulacrum_I'm fine with an action item of "try to run cargobomb"
22:21simulacrum_(for the whole team; whoever can, does)
22:21aturonok noted!
22:21tomprinceI haven't run it on a large set of crates, but for the tiny example I've run, it seems straightforward.
22:21aturonalright we're coming close to the end, simulacrum_ want to dive into your earlier item?
22:21simulacrum_I guess I can
22:21simulacrum_So basically every time we add a dep, we need to ship it
22:21simulacrum_And that can increase compiler size quickly
22:22aturontomprince: (oh and i should mention, as part of an MVP we likely need this to be hosted *somewhere* rather than asking people to tie up a machine for 72 hours)
22:22aturonacrichto: did you see the notes on this size issue in the agenda?
22:22acrichtosimulacrum_: are you worried about a particular outcome specifically?
22:23simulacrum_For example, last stable was 273 MB, beta is 320, and nightly is 383
22:23* acrichto looks at the agenda
22:23simulacrum_(this is for macOS, iirc x86_64 ubuntu is larger)
22:23acrichtosimulacrum_: do you know if that's related to deps?
22:23simulacrum_96% sure but uncertain
22:23tomprinceYeah, definitely.
22:23simulacrum_I'm worried that we're going to get people who say "oh no rust compiler is so big"
22:23acrichtolast I checked we had an order of magnitude less code in src/vendor than src/llvm
22:24acrichtoand I'd be pretty skeptical of binary sizes as well
22:24acrichtocargo.exe is like 10MB and has dozens of deps
22:24simulacrum_I may be wrong though
22:24acrichtorlibs could be massively bloated though and we&#39;re shipping more of them
22:24carols10centsyeah like... if this is bad for us, it&#39;s also bad for everyone using rust
22:24shepI have noticed similar growth over the channels (from the playground)
22:24aturonhm, but *something* is getting bigger
22:24simulacrum_It's somewhat hard for me to speak to what's the cause; I just know that there's lots
22:24acrichtoah ok
22:24acrichtoso this definitely is a worry of mine
22:24acrichtoI think it'd be worth drilling into what's what here though
22:24aturondo we have an open issue on it?
22:24acrichtoin terms of distribution size
22:24shepnightly => 541 MB; stable => 496 MB
22:25acrichtoshep: what's that a measurement of?
22:25simulacrum_I can investigate if we want as an action item
22:25acrichtosimulacrum_: that&#39;d be awesome yeah
22:25aturonsimulacrum_: <3
22:25shep"Compressed Size" of the docker container with toolchain + top 100 crates in debug and release mode
22:25acrichtosimulacrum_: I'd categorize into "source changes, rlib changes, and binary changes"
22:25simulacrum_sounds good
22:26aidanhsoh one thing I forgot to put on the agenda if we are done
22:26acrichtosome of this may be rustc regressions, not dep regressions
22:26aturonaidanhs: go for it
22:26aidanhscan we split up the network spurious failures?
22:26aidanhswe have a lot, and many are not the same
22:26aturonomg yes please
22:26simulacrum_git vs non git?
22:26aidanhsfor a start
22:26simulacrum_Or even more fine grained
22:27simulacrum_I'm fine with, git (github), and other
22:27simulacrum_Does that sound good?
22:27aidanhsI'd be happy with a starting point of nobody ever referencing the umbrella issue again
22:27aidanhshowever the chips fall, I don't mind
22:27acrichtoalthough we need a bucket
22:27simulacrum_I'll open issues
22:27acrichtowe can slice out big chunks but there's gotta be some way to have an issue for everything
22:27aturonacrichto: just open a new one if it doesn't fit existing buckets?
22:27aturonissues are cheap
22:28* aturon says as simulacrum_ and steveklabnik cry
22:28acrichtosimulacrum_'s breakdown sgtm
22:28acrichtoone-off issues just tend to not do much
22:28aidanhsoh yeah, the umbrella should be open, it's just not useful to see it in a retry reason without additional justification
22:28acrichtothey&#39;re fire and forget and closed a year later after no activity
22:28aturonacrichto: A-spurious though
22:28aturonthere&#39;s some nuance here
22:28aturonbut anyway, we can start with simulacrum_&#39;s proposal
22:29acrichtoyes that&#39;s fine
22:29aidanhsok maybe sub-issues, and then if it's known temporary (appveyor) then give an additional few words?
22:29aidanhs(and use umbrella)
22:29aidanhsbut ok, simulacrum_'s idea then see next week
22:29aturonthis all reminds me, i talked with jdm about intermittent tracking, he pointed me to
22:29simulacrum_I'll open issues :)
22:30aturonthat's basically a ready-to-go solution for tracking spurious failures at fine grain
22:30aturoni don't know what would be involved in setting it up
22:30aturonaidanhs: feel like kicking the tires on that?
22:31aidanhsI can take a superficial look
22:31aidanhs(dunno how long twitter bot will take)
22:31aturonah right, forgot you already had stuff assigned
22:31aturonsuperficial look is totally fine
22:31aturonbasically just need to figure out if it's worth our time to try to set it up
22:32aturonwe're over time
22:32aturony'all are fantastic
22:32aturonthanks for stepping up to do this work every week
22:32aturonhave a good weekend
22:32simulacrum_Thanks all!
22:32simulacrum_Especially aturon for leading discussion
22:33aidanhsthanks all, special thanks to simulacrum_ for being issue triage hero over the past week
22:53simulacrumacrichto: ping
22:56acrichtosimulacrum: pong
22:56simulacrumSo just trying to reproduce the bug, and possibly because I've rebooted my build server, I get [23/-1] authentication required but no callback set as the cause for failed to update
22:56simulacrumyet "ssh" works
22:56acrichtosimulacrum: do you have ~/.gitconfig
22:57acrichtowith insteadOf ?
22:57simulacrum[url "ssh://"]
22:57simulacrum insteadOf =
22:57simulacrumI have that
22:57simulacrumNo clue
22:57acrichtocargo needs to handle this
22:57simulacrumI don't even know why
22:59simulacrumI don&#39;t understand how this worked before
23:00acrichtosimulacrum: cargo didn&#39;t clone from github
23:00acrichtob/c there were no git references
23:00acrichtobut now we&#39;ve got a git dep on cargo
23:00acrichtofrom the rls to cargo
23:00simulacrumNo, I mean like "this morning"
23:00acrichtooh lol
23:00simulacrumYou know
23:00simulacrumI feel like I might've skipped copying that line
23:00simulacrumAnyway, I can't reproduce
23:01simulacrumNo idea what was wrong
23:01simulacrumSo I'll close the bug
23:01simulacrumacrichto: Thanks!
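The `insteadOf` interaction simulacrum hit can be reproduced in isolation: a gitconfig URL rewrite silently turns https fetches into ssh ones, which libgit2-based callers like cargo then fail on unless they supply an authentication callback. A hedged sketch using an isolated config file; the hostnames are placeholders, not taken from the log:

```shell
# Demonstrates the insteadOf rewrite discussed above, confined to a
# throwaway config file. example.com is a placeholder host.
set -e
cfg="$(mktemp)"
git config --file "$cfg" 'url.ssh://git@example.com/.insteadOf' 'https://example.com/'
# Any tool reading this config now fetches https://example.com/* over
# ssh, so an https clone quietly starts requiring ssh credentials.
# --get prints the rewritten base: https://example.com/
git config --file "$cfg" --get 'url.ssh://git@example.com/.insteadOf'
```

Because the rewrite happens at the config layer, the failure looks like an authentication bug in the tool rather than a local configuration choice, which is why it is easy to forget the rule is there.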