mozilla :: #rust-infra

9 Sep 2017
00:04 <simulacrum> actually, never mind
00:12 <simulacrum> acrichto: aturon: If someone could delete or move to deprecated or w/e that'd be great. It's now in the rustc-perf repo
00:12 <simulacrum> not sure what policy is
00:33 <acrichto> simulacrum: will do
01:06 <acrichto> > This cleanup is likely to take all weekend
01:06 <acrichto> alright folks no more builds until monday
01:08 <simulacrum> Well, I guess that makes getting us to a waiting-on-bors state for the whole queue feasible at least.
01:52 <ted> acrichto: FYI 10 years ago all Firefox unit tests ran on special unittest builds that did a build + ran tests
01:53 <ted> in like my second year at mozilla i basically split out the tests into separate jobs all by myself
01:53 <acrichto> "it's possible to change!"
01:53 <ted> it was...a lot of work
01:53 <ted> if i had a do-over i would have made the tests have a source checkout from the very beginning
01:54 <ted> that would have simplified things, i think
01:54 <ted> you could just unpack the build results and pretend like you had done a build
01:54 <ted> but we might have still even been using CVS at that point
01:55 <acrichto> good lord
01:55 <acrichto> but that's also pretty interesting
01:56 <acrichto> b/c in that sense
01:56 <acrichto> you're slinging gigabytes of build artifacts back and forth
01:56 <acrichto> btwn linux and osx
01:56 <acrichto> not that that's bad per se
01:56 <ted> i don't think it's quite *that* bad
01:56 <ted> but yeah
01:57 <ted> right now we're dumb and we pack up all these static test files that live in the repo, upload them, then download and unpack them in the test jobs
01:57 <ted> we looked at optimizing it and realized we were reinventing VCS
01:57 <ted> and we should just clone the repo again
01:57 <ted> i think gps might actually have enough things in place to make that workable now
01:58 <ted> he had patches to do it for web-platform-tests, but it wasn't workable at the time because our windows AMIs were too slow cloning the repo
01:58 <ted> something about if you store the EC2 snapshot in EBS it streams things out painfully slowly on first use
01:58 <acrichto> I always just assumed things were slower on windows
01:59 <ted> so many problems in build/test automation wind up down these weird dumb rabbit holes
01:59 <acrichto> lol you're not kidding
01:59 <ted> did you see our went-on-forever bug after we switched the firefox mac builds to be cross-compiled?
01:59 <ted> or did i tell you about that in pdx
02:00 <ted> i actually summarized it in the "user story" there
02:00 <acrichto> oh no I know it had been a long time coming
02:00 <acrichto> and only recently learned that it actually got turned on
02:01 <acrichto> but congrats!
02:01 <ted> the problem as described there was bad enough, but then when we tried to change the paths in the docker images we fell into other holes
02:01 <ted> we wound up taking a custom valgrind patch to be able to fix our valgrind tests
02:01 <notriddle> Windows can run on commodity hardware. I'm pretty sure that helps a lot.
02:01 <acrichto> simply b/c of the build path things randomly failed?!
02:01 <ted> acrichto: well, it turns out our docker image generation is not deterministic
02:02 <ted> so we had this valgrind failure that showed up anytime we did something that tried to update our build image
02:02 <ted> and we had just been kicking that can down the road
02:02 <ted> rebuilding the docker build image caused it to pull in a newer libllvm, which is used by mesa, which gets loaded in firefox as the graphics driver
02:02 <ted> the newer libllvm had a new leak
02:03 <ted> so valgrind would complain and fail the test
02:03 <ted> normally you just put in a valgrind suppression when a leak crops up in a system library
02:03 <ted> but valgrind has this long-standing bug where if a shared library gets unloaded before exit valgrind drops the debug info for it so it can't report a stack
02:03 <ted> so you get like "leak in ?????"
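[For context: a valgrind suppression matches a leak by the frames in its reported stack, which is exactly what breaks here. A hypothetical suppression for a libLLVM leak (not Mozilla's actual one) might look like this:]

```
{
   mesa-libllvm-leak
   Memcheck:Leak
   match-leak-kinds: definite
   ...
   obj:*/libLLVM*.so*
}
```

[Once valgrind has dropped the debug info for the unloaded library, the stack is all `???` frames, so there is no `obj:` or `fun:` line left to match against.]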
02:04 <acrichto> oh dear
02:04 <ted> but thankfully we do employ the authors of valgrind
02:05 <acrichto> "how convenient!"
02:05 <ted> so sewardj wrote us a patch, we're using it in the valgrind we run in automation, and then we were able to write a suppression for it
02:07 <ted> i think there might be some other bits i'm forgetting in there
02:07 <ted> ...but it was quite the process
03:22 <est31> acrichto: simulacrum: what do you think?
03:23 <est31> it seems like 90% of the reports in cargobomb are added warnings
03:23 <est31> + crates that did #![deny(warnings)] in some form
03:23 <est31> should I file issues for such?
03:25 <WindowsBunnyConsumesAnimalFlesh> est31: What if you do a second cargobomb run where you cap lints so deny(warnings) doesn't break stuff?
03:25 <est31> does that work?
03:26 <est31> I can filter out the bad reports manually
03:26 <est31> it's just 114 reports :P ... but it's definitely better to have it auto-filtered
05:09 <acrichto> est31: oh nah for warnings we don't track those
05:09 <acrichto> b/c they don't break deps
05:09 <acrichto> iirc there's a bug on cargobomb to fix this
05:09 <acrichto> use something like RUSTFLAGS to pass --cap-lints for everyone
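[The workaround acrichto mentions can be sketched as a one-liner; the exact plumbing inside cargobomb may differ. `--cap-lints` is the same mechanism Cargo already applies to dependency crates, but a tool like cargobomb builds each crate as the top-level crate, where lints are not capped by default:]

```sh
# Cap all lints at "warn": crates with #![deny(warnings)] then still
# build when a newer compiler adds warnings.
RUSTFLAGS="--cap-lints=warn" cargo build
```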
05:10 <est31> I'll triage it then
05:10 <est31> the list should be very short
05:17 <est31> why is this recorded as a test failure??
05:22 <acrichto> est31: Sep 03 20:22:07.188 INFO kablam! su: No module specific data is present
05:22 <acrichto> maybe doctests failed?
05:22 <est31> acrichto: that message is present in other places as well
05:23 <acrichto> oh weird
05:23 <est31> this is recorded as test-pass, everything perfect
05:23 <acrichto> maybe the doctests segfaulted?
05:23 <acrichto> they didn't print out "x tests passed"
05:23 <est31> there are a couple of such reports
05:23 <est31> I'll put them into the cargobomb bug
05:24 <est31> last time I could observe similar behaviour as well
05:24 <est31> seems it's reproducible for the ceramic crate
05:25 <est31> lol my own crate is inside the list
05:26 <est31> it's fortunately fixed on master already xD
05:26 <est31> maybe I should do a point release...
05:26 <est31> but then again, it doesn't affect dependencies
05:58 <WindowsBunnyConsumesAnimalFlesh> est31: does cargobomb actually work on windows yet?
06:19 <est31> 4 bugs reported
06:40 <est31> much less breakage
06:40 <est31> but far more false positives
08:02 <est31> one of the 4 bugs isn't even a valid bug
08:03 <est31> and 2 others will most likely be closed with "expected breakage"
08:03 <est31> one bug remains
08:03 <est31> for which they've done a crater run
08:03 <est31> but obviously they've missed the one regression
16:00 <acrichto> est31: thanks so much for triaging that!