mozilla :: #rust-infra

7 Sep 2017
00:34 <est31> simulacrum: is it really useful to have rollups of such a big size?
00:34 <est31> why not halve them
00:51 <aidanhs> est31: why would you halve them? (also, rollups are automatically generated, so halving is easier said than done)
00:52 <est31> later on bisection is more precise
00:52 <est31> if there are any regressions
00:52 <est31> and I thought it was half-automated
00:54 <aidanhs> you should still be able to bisect into the merge
00:54 <aidanhs> I'd think it'd be very rare that the difference between the point where master branched for the PR and the state of master just before the merge actually matters
00:55 <aidanhs> combined with the general unlikeliness of rollups causing regressions
00:56 <est31> also the second issue: there is a certain non-zero likelihood that some PR might not build successfully
00:56 * aidanhs describes the process
01:20 <simulacrum> est31: Yeah, I don't see the point in making rollups smaller -- presumably they won't fail, so making them smaller is inefficient
01:20 <simulacrum> If they fail legitimately, it's usually relatively easy to know why
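[Editor's note: the bisection point above can be sketched abstractly. Finding the first bad change inside a rollup is a binary search over its constituent PRs, so a rollup of n PRs costs only about log2(n) extra bisect steps; halving the rollup saves roughly one step. This is a hypothetical illustration, not the actual homu/bors tooling:]

```python
def first_bad(commits, is_bad):
    """Binary search for the first commit that makes the build bad.

    `commits` is ordered oldest to newest; `is_bad` plays the role of
    the test you'd run at each `git bisect` step. Assumes the newest
    commit is bad and the state before the range was good.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # culprit is at mid or earlier
        else:
            lo = mid + 1      # culprit is after mid
    return commits[lo]

# Toy rollup of 8 PRs where PR "e" introduced the regression:
rollup = list("abcdefgh")
bad_from = rollup.index("e")
print(first_bad(rollup, lambda c: rollup.index(c) >= bad_from))  # e
```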
01:55 <aidanhs> doing PR triage, "S-waiting-on-bors PRs: All PRs should be processed" two pages of PRs
02:51 <simulacrum> aidanhs: don't worry about it if you don't want to
02:51 <simulacrum> not *too* important
04:36 <nrc> acrichto, simulacrum: do we still build save-analysis data for nightlies only?
05:02 <acrichto> nrc: hm, I forget?
05:02 <acrichto> presumably, if beta is failing?
05:02 <nrc> yeah, that is what it looked like from where the panic was happening, and looking at my local disk, the analysis directory did not exist
05:03 <nrc> however, I think it might be a red herring
05:04 <nrc> your hypothesis that the installer is using rls-preview for the name of the directory was correct, and addressing that fixed the issue. I've no idea why that prevented the save-analysis data from being generated, though
05:04 <nrc> currently waiting for dist to finish locally, but it is looking promising
05:04 <acrichto> nrc: lemme find the line
05:05 <nrc> I found where I thought it happened in bootstrap - both in the rustc shim and where we set the env var, and neither has a 'nightly' check
05:06 <acrichto> right, yeah
05:06 <acrichto> I remember that was there before
05:07 <acrichto> nrc: locally you did --enable-extended?
05:07 <nrc> although I did it before without, which may have tripped it up
05:08 <acrichto> oh yeah, rustbuild wouldn't know to go and redo that
05:46 <nrc> OK, I'm confused
05:46 <nrc> running without --enable-extended and then with it repro'd the error from Homu. But doing a clean and then running it did not repro it.
05:47 <nrc> I thought that was because I'd fixed the problem, but my change got stomped by the submodule update
05:49 <nrc> so ftr, the tarball name doesn't take the component name, but the directory inside the tarball does (I'm not actually sure if that is what rustup will expect)
05:56 <nrc> ok, trying again...
05:56 <est31> acrichto: wow, that was a fast r+
08:05 <nrc> ok, I am baffled - I literally have the rls-beta tarball and no errors; how could this happen when the bots fail?
10:21 <aidanhs> simulacrum: nah, I went through them, just a depressing reminder of the current bors madness
10:40 <kennytm> brace yourself. mac errors are coming (again).
14:15 <acrichto> nrc: left a comment on thread
17:37 <acrichto> edunham: did you log in to easydns?
17:43 <edunham> acrichto: nope
18:18 <aturon> acrichto: i did
18:18 <aturon> acrichto: did i screw something up?
18:19 <aturon> (i was working on mailmaps, and plan to continue doing so shortly)
18:19 <acrichto> aturon: nah, I just get an email and wanted to confirm
18:19 <acrichto> only two people on this planet should have the password
18:19 <acrichto> so if it wasn't you I'd be very scared
18:20 <aturon> hah, gotcha
20:26 <arielby> simulacrum: p.rl.o is "Updated as of: 9/4/2017, 1:33:53 PM"
20:26 <arielby> why don't we have new data?
20:36 <acrichto> larsberg: if you'd still like heads-ups: looks like travis deployed some update to their linux workers that broke our test suite
20:44 <acrichto> arielby: hm, not sure why that happened, but if you visit the /perf/onpush route it updates things
20:44 <acrichto> so the site should be updated now
20:44 <acrichto> not sure why that didn't happen automatically
20:44 <larsberg> acrichto: thanks for the heads-up!
20:45 <larsberg> I generally see them go by in #embassy too, though I don't check that slack in... shall we say, real time :-)
20:49 <arielby> acrichto: now it's "Updated as of: 06/09/2017, 3:28:15"
20:49 <acrichto> I see "Updated as of: 9/5/2017, 7:28:15 PM"
20:49 <misdreavus> that may be the same timestamp, rendered across different timezones/locales?
20:50 <arielby> looks like it
20:50 <misdreavus> why it doesn't have yesterday's data is another question
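[Editor's note: the timezone/locale guess can be checked mechanically. A minimal sketch with fixed UTC offsets; the specific zones (US Pacific daylight time and British summer time) are assumptions for illustration, since the log doesn't say where each person is:]

```python
from datetime import datetime, timezone, timedelta

# One instant, stored once (e.g. by the perf site) in UTC.
utc_instant = datetime(2017, 9, 6, 2, 28, 15, tzinfo=timezone.utc)

# The same instant rendered in two assumed local zones:
pdt = timezone(timedelta(hours=-7))  # US Pacific daylight time
bst = timezone(timedelta(hours=1))   # British summer time

print(utc_instant.astimezone(pdt))  # 2017-09-05 19:28:15-07:00 (a "9/5/2017, 7:28:15 PM" rendering)
print(utc_instant.astimezone(bst))  # 2017-09-06 03:28:15+01:00 (a "06/09/2017, 3:28:15" rendering)
```

One instant, two dates: US-style month/day formatting in one zone and day/month formatting in another look like entirely different timestamps.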
21:45 <arielby> any idea what's up with the perf bot?
21:45 <arielby> it hasn't made any commit since Date: Wed Sep 6 09:12:09 2017 -0400
21:46 <arielby> can anyone look at its logs?
21:49 <acrichto> arielby: have we had a commit since then?
21:50 <arielby> acrichto: several?
21:50 <acrichto> nothing looks awry on the perf bot
21:50 <arielby> the last commit is 9 hours ago
21:50 <arielby> is it running?
21:51 <acrichto> hm, something got wedged
21:51 <acrichto> it's now unwedged
21:52 <arielby> is it running tests?
21:52 <acrichto> now it is
21:58 <acrichto> simulacrum: hm, did you change something recently w/ perf collection?
21:58 <acrichto> it's getting wedged w/ like funky zombie processes
22:06 <simulacrum> acrichto: Hm, I don't think so -- I upped the file limit (ulimit -n) to 4096
22:06 <simulacrum> before that it was running out of file descriptors, not sure
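[Editor's note: the `ulimit -n` bump above can also be done from inside a process. A minimal sketch using Python's `resource` module; the 4096 value mirrors simulacrum's change, and the actual collector setup is not shown in the log:]

```python
import resource

# Current soft/hard limits on open file descriptors (what `ulimit -n` shows).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Raise the soft limit toward 4096; an unprivileged process can only
# raise it as far as its hard limit.
if hard == resource.RLIM_INFINITY:
    target = 4096
else:
    target = min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```

Unlike a shell `ulimit -n`, which only affects the shell and its children, this applies to the running process directly.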
22:07 <erickt> aturon: aw drat, I got a work meeting conflict for tomorrow
22:48 <simulacrum> acrichto: We can delete the perf collector on ec2
22:49 <simulacrum> Not entirely sure what needs to go into that, but it's not being used
22:51 <acrichto> I'll do that soon
8 Sep 2017
No messages