mozilla :: #taskcluster

13 Sep 2017
00:14glandiumjonasfj: ping
00:22jonasfjglandium: What up?
00:22glandiumjonasfj: hey, could you grant me the scopes for windows builders?
00:23jonasfjTo what worker types, to do what?
00:23glandiumjonasfj: I want to try to build git-cinnabar for windows on tc
00:25jonasfjso I'm not sure how good or bad the windows workers we have are at cleanup
00:25jonasfjie. if it's too easy to poison them, then it's probably not a good idea...
00:26glandiumI'll keep using appveyor for now, then
00:26jonasfjI'm thinking this is a good question for pmoore, ie. what workerType would be appropriate and to what extent do we have to look out for poisoning..
00:27jonasfjglandium: ask pmoore tomorrow, I don't think it's impossible... otherwise I'll keep you on my short-list of guinea pigs for when I have windows VMs on tc-worker with QEMU engine :)
00:31glandiumcome to think of it, I could probably cross-build. That wouldn't handle tests, though
06:42pmoore|awayglandium: are you looking to build git-cinnabar from a github CI?
06:42pmoore|away... ah yes i guess so if you are using appveyor
06:43pmoore|awayglandium: indeed you can also do this via taskcluster with a .taskcluster.yml in your github repo - which user/org is the repo in?
06:45pmooreglandium: also, which windows OS version(s) do you want to build on? windows server 2012 r2?
06:46pmooreah i guess*
06:47pmooreok, i'll add worker type win2012r2
06:47pmooreyou can use that ;)
06:48glandiumpmoore: no risk of poisoning?
06:51pmoorewin2012r2 isn't used for gecko, so it should be fine
06:52pmooreit is just a general purpose worker type for running stuff on win2012r2
06:52pmooreif you want any caches, let me know, we can grant you scopes for caches too
06:52glandiumwell I also don't want a PR to be able to poison
06:54pmooreglandium: i think that would only be a problem if you decided to share caches across PRs and the master branch
06:56glandiumpmoore: if caches are the only way things can be polluted, then ok
06:56glandium(I don't want caches anyways)
06:57pmooreyeah, each task runs under a separate user account, created just for that task, so they shouldn't be able to interfere with each other. the task users are non-admins
06:58pmooreglandium: here is a sample win2012 taskcluster-github integration:
06:59pmoorenote - that is for a different workerType (win2012r2-cu) but it gives you an idea how to construct the task definition
07:00pmooreglandium: it also mounts a git installation, as an example how to use a toolchain that isn't installed on the base instance
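For reference, a minimal .taskcluster.yml task along the lines pmoore describes might look like the sketch below. This is a hedged reconstruction, not pmoore's actual sample: the repository details, mount URL, and metadata values are placeholders, and field names should be checked against the taskcluster-github and generic-worker docs of the time.

```yaml
version: 0
tasks:
  - provisionerId: aws-provisioner-v1
    workerType: win2012r2
    extra:
      github:
        # which github events should trigger this task
        events: [push, pull_request.opened, pull_request.synchronize]
    metadata:
      name: git-cinnabar windows build            # placeholder
      description: build git-cinnabar on win2012r2
      owner: '{{ event.head.user.email }}'
      source: '{{ event.head.repo.url }}'
    payload:
      maxRunTime: 3600
      # on generic-worker, each list item is a complete command line
      command:
        - 'git clone {{ event.head.repo.url }} repo'
        - 'cd repo && git checkout {{ event.head.sha }} && build.cmd'
      # mount a toolchain that isn't on the base image (hypothetical URL)
      mounts:
        - directory: git
          format: zip
          content:
            url: https://example.com/git-for-windows.zip
```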
07:02glandiumpmoore: what's the difference between win2012r2 and win2012r2-cu?
07:04pmoorewin2012r2-cu is used only by generic-worker CI, where "cu" stands for "current user" - the worker runs in a mode where it runs the task commands as the current user of the existing process, rather than running sandboxed under a dedicated user account
07:04pmoorefor the generic-worker CI, we need to be able to run the worker code that spawns users, so we need the task to be able to run in the same privileged state as the generic worker that runs the CI
07:05pmooreso it is something just for the generic-worker CI only
07:05pmooreit is because we run the generic worker to test the generic-worker (a bit of a chicken/egg situation) but the generic-worker running on the CI is a stable version
07:06pmoorea bit like hosting git source code in a git repo etc
07:07pmooreor writing a c compiler in c
07:07pmooreyou get the idea :)
07:08pmooreglandium: fwiw i really like appveyor - if that already serves you, that might be enough. if you want to integrate into other task graphs etc then i can see why it is nice to run in taskcluster
07:09pmoorefor a long time i ran the generic-worker CI in appveyor - it was only when i needed to run the CI on win7/win10 etc that i switched to using taskcluster-github
07:21glandiumpmoore: I have a test task that has been pending for 5 minutes, is that expected to have such pending times?
07:21pmoorelet me check
07:22pmooreglandium: looks like some instances are starting up:
07:23glandiumoh, I didn't know about this page, nice
07:26pmooreglandium: "set" should work (rather than "env")
07:27pmoorebut happy to see it is working :)
07:28pmooreglandium: so those pending times were only because there were no running instances - if there is a non-empty pool, you can expect tasks to start pretty much immediately, or around 5 mins if the pool is empty
07:29glandiumpmoore: I guess those workers are not used a lot :)
07:29pmooreexactly! :)
07:30pmooreglandium: i'll upgrade that worker type today to the latest version (10.2.2) - currently it is on 10.1.6
07:30pmoorethe main change is that you'll be able to coalesce tasks if you want to
07:48glandiumpmoore: I guess it's a rather pristine environment with nothing like mingw installed?
07:49pmooreglandium: this is the *full* environment setup:
07:50pmooreso it has e.g.
07:50glandiumand git 1.9.5
07:51pmooreah i'd forgotten about that
07:52glandiumah cygwin
07:54pmooreyes, there is an sshd running too, which is pretty nice
07:54pmooreglandium: but i think msys will get installed as part of mozilla build
07:55glandiumit's kind of awkward that payload.command is not the same thing on those workers as on linux workers
07:56glandiumeach item on win2012r2 is a command; on linux workers, each item is an argument to a single overall command
08:02pmooreyes, this is one of my biggest regrets/mistakes, i'm sorry about that
08:03pmooreglandium: in reality, shell command interpretation is different on windows/linux - on linux, the shell tokenises the command line, whereas on windows, the invoked program parses its own command line, so it is not standard
08:04glandiumI know that, but doesn't make the task definition difference less awkward
08:05pmooreglandium: there are some nice yaml tricks for making the .taskcluster.yml easy to read though
08:06pmooreglandium: e.g. here there are several linux commands, and not using the annoying && syntax:
08:07pmoorewhich indeed looks slightly different to:
08:07pmoore(or rather
08:08pmooreglandium: i highlight this, as this should hopefully save you some time/pain
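To illustrate the difference (and the sort of YAML trick pmoore is alluding to), here is a hedged sketch of the two payload shapes — the repo URL and build commands are placeholders. On docker-worker, a literal block scalar plus `bash -e` gives one command per line without `&&` chains; on generic-worker, each list item is its own command line:

```yaml
# docker-worker (linux): payload.command is ONE command,
# each list item an argument to it
payload:
  command:
    - /bin/bash
    - -ec          # -e: stop on first failure
    - |
      git clone https://github.com/example/repo repo
      cd repo
      make
---
# generic-worker (win2012r2): each item is a separate command line,
# run as its own process - so a `cd` does NOT carry over to the next item
payload:
  command:
    - 'git clone https://github.com/example/repo repo'
    - 'cd repo && build.cmd'
```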
08:09pmooreglandium: btw the g-w upgrade to 10.2.2 is underway, so at some point, you might notice that the worker version number bumps
08:13gerard-majaxdustin, followup, just in case somebody else get caught by this: it's a github-side change:
08:42glandiumpmoore: what formats are supported for mounts, apart from zip?
08:43pmooreglandium: see
08:43pmoorerar tar.bz2 tar.gz zip
08:46glandiumgah... and I guess 7z is not installed
08:46glandiumI might as well create a task on a docker worker that munges the original file and puts it in a format that generic worker likes, as an artifact
08:47glandiumthat will also have the advantage of acting as a cache and avoiding hitting an arbitrary server every time
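A hedged sketch of that plan: the generic-worker task mounts the artifact produced by the repackaging task, fetched through the queue (URL shape as of 2017; the taskId and artifact name are placeholders):

```yaml
payload:
  mounts:
    # supported archive formats per the docs quoted above:
    # rar, tar.bz2, tar.gz, zip
    - directory: toolchain
      format: zip
      content:
        # artifact from a docker-worker task that repackaged the
        # upstream download into a zip (placeholder taskId)
        url: https://queue.taskcluster.net/v1/task/abc123TASKID/artifacts/public/toolchain.zip
```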
08:55gerard-majaxtar.xz as well
08:55gerard-majaxpmoore, we added tar.xz ourselves :)
08:55glandiumgerard-majax: it's not in the doc!
08:57gerard-majaxI don't know how that should have been done to end up in the documentation :/
08:58glandiumgerard-majax: that's not the same thing at all
08:59gerard-majaxoh, for mounts
08:59gerard-majaxI thought it was about artifacts
08:59gerard-majaxnevermind then
09:00pmooreglandium: isn't it here? c:\\mozilla-build\\7zip
09:02pmooreglandium: see e.g.
09:03pmooremsys is in there too
09:03pmoorejust set the PATH to the tools you want, like in this example
09:04pmooreglandium: the only difference between nss-win2012r2 and win2012r2 is that win2012r2 also has AZCopy, and nss-win2012r2 is a dedicated worker type for NSS
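Pulling the earlier hints together, a hedged payload fragment — `set` rather than `env` (per pmoore above), prepending the pre-installed tools to PATH. Only C:\mozilla-build\7zip is confirmed in the discussion; the archive name is a placeholder. Since each command line runs as a separate process, the `set` and the command that needs it must share a line:

```yaml
payload:
  command:
    # chain with & so the PATH change applies to the same cmd process
    - 'set PATH=C:\mozilla-build\7zip;%PATH% & 7z x vendored-toolchain.7z'
```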
09:22pmooreglandium: the workers should all be upgraded now, fwiw (not that it will affect you much)
10:12jhfordbstack: in statsum, is it possible to 'flush' the collection? e.g. I want to collect stats once per hour, and at the end of that collection I want to submit those stats and reset the counts/measures
10:29jhfordin other words, could we have series which are 'per-iteration' instead of per-5m or per-1h
10:40pmooregrenade: a heads up - i just saw this:
10:40firebot ASSIGNED, Make Win10 SDK (minimum v10.0.10586.0) required for building Firefox
10:41pmoorenot sure if that will impact OCC
10:46grenadePretty sure we're already installing it. Will check after lunch.
11:11garndtEmail on dev-platform said automation is already using 10.0.14393
11:12pmooregrenade: ^
11:12pmooregarndt: thanks!
11:13garndtNp sir
11:34pmooregarndt: we decommissioned the scheduler, right? so i can remove queue:scheduler-id:* scopes from roles/clients that have them?
11:34pmooresee with search string "queue:scheduler-id:"
11:36pmooreah yes:
11:36firebotBug 1259627 FIXED, Stop using TC scheduler API
11:37* pmoore plays kerplunk with roles
11:39pmoorehmmmm, maybe i'll hold off, maybe the queue still respects scheduler-id - silly me
11:41pmoorequeue.createTask still needs it
11:43pmoorejhford: do you object to me removing Scheduler and SchedulerEvents from ?
11:43pmoore(see bug 1259627)
11:49pmooreok there are still scopes with suffix "scheduler:" in - i'll get rid of those
11:49pmooreand update the bug
11:50pmoorerail/garndt/bstack/dustin: let me know if you see any issue with removing these from our roles/clients:
11:55grenadepmoore: i've added a note to clarify that tc win builders are using the version specified in the bug.
11:55firebotBug 1380609 ASSIGNED, Make Win10 SDK (minimum v10.0.10586.0) required for building Firefox
11:56jhfordpmoore: nope! no objections from me
11:57pmoorejhford: done! :)
12:11pmooregarndt: do we have a tracking bug for decommissioning the scheduler? (i think it is still running:
12:18garndtUnsure at the moment. Let's sink up with brian later to come up with a final set of things to check
12:18garndtSync up I mean
12:27pmooregarndt: i created 1399437 - we can always dupe it if another bug is out there
12:27pmoorebug 1399437
12:27firebot NEW, Sunset the scheduler
12:36pmoorebstack: ^
12:37dustinI like "sink up" better
12:38dustinI think funsize is still using it
12:38dustinbut the replacement is landing shortly
12:39wcostaaki-away: scriptworker requires python3, right?
12:41pmooredustin: i think that landed in
12:42dustinoh, great
12:42dustinmaybe we're good to turn it off then :)
12:43catleeyeah, pretty sure rail got rid of it for funsize too
13:01pmooredustin: do you agree i can remove queue:scheduler-id:gecko-level-1 from* since it has assume:moz-tree:level:1 which already has it? (and will keep tcadmin happy)
13:01pmoore(just a second pair of eyes so i don't do something stupid!)
13:06catleegarndt: is something you want to do with the new system?
13:06firebotBug 1154027 NEW, File diagnostics bugs directly from slave health
13:14garndtcatlee: it's a requirement we haven't discussed yet. My initial impression would be that it's out of scope (and lower priority) but would need to discuss more
13:15dustinpmoore: hm, sounds right to me
13:15dustinthanks for using tcadmin
13:17pmooredustin: it is a nice tool! update done. cross *at least* your fingers
13:17* pmoore stands on his head
13:19pmooreftr, this is the *dry run* result now:
13:20pmoorebut don't worry, i'm staying well clear of* :-)
13:22* pmoore looks forward to parameterised roles (intentionally using the british english spelling here)
14:20pmoorearr: if you are feeling brave, i've landed an OS X patch for coalescing tests on macOS:
14:20pmoorebut it needs merging from default -> production
14:20pmoorenote, that is just a prerequisite for landing the gecko change that enables it
14:25pmooreah, i guess this is probably something i should request from buildduty
14:25pmooreaselagea|buildduty: are you ok to merge default -> production for me in puppet, or do i need to schedule that formally etc?
14:26aselagea|builddutyyup, on it
14:26pmooreaselagea|buildduty: you da best! :))))
14:26pmooreaselagea|buildduty: i would just like to say in advance. I AM SO SORRY for all the fallout.
14:26pmoore.... just kidding - it will be fine ;)
14:27pmooreaselagea|buildduty: i'll be around for the next couple of hours anyway, if there is any fallout
14:29aselagea|builddutydone :D
14:43gerard-majaxbisecting on mozilla-inbound is really slow just because of taskcluster downloads :(
14:44gerard-majaxprobably because the artifacts it downloads are not mirrored close to me
14:45dustinyeah, you're probably hitting cloudfront, and the EU endpoints wouldn't have the files..
14:46gerard-majaxlikely, but hard to verify for sure, mozregression does not give me more info
14:46gerard-majaxand the files I hit will likely only get mirrored after I've already downloaded them :)
14:52RyanVMare there known issues with Windows workers on Try?
14:53RyanVMmy push to Try from 50min ago still has pending Windows builds
14:53dustinpmoore|mtg: ^^?
15:00pmoore|mtgdustin: RyanVM: hmmmm, i'll take a look :/
15:01Aryxthank you, see
15:03aki-awaywcosta: correct, python3. but the script doesn't have to be
15:06pmoore|mtgRyanVM: oh boy, it looks like we might have a problem indeed
15:08pmoore|mtggrenade: do you know what this might be caused by?
15:13pmoore|mtgClear-Disk seems to come from but that commit landed a long time ago and was working so i don't think it is at fault
15:17bstackjhford: I don't think it's possible with the interface it exposes at the moment, but it should be possible to hack in I think?
15:18jhfordso for the provisioner work coming up, it'd be great
15:19jhfordright now, we're doing the iterations in question once per hour, so if we look at the 5m graph, we should only get a single iteration per dataset... i think? does that sound right
15:19bstackpmoore|mtg: there were still some tc things using the scheduler when I checked last week, but I think garndt took care of them? I'll check again today when I'm actually on a computer.
15:20pmoore|mtgbstack: no worries! if you find anything, feel free to dump it to the list of TODOs in bug 1399437
15:20firebot NEW, Sunset the scheduler
15:20bstackYeah, I think that's right.
15:21bstackYou can set things to interpolate differently in signalfx to get the 5-min stats to look a bit better.
15:21garndtbstack: I can check the audit logs, I have to double check to see if docker-worker and mozilla-taskcluster are doing the right things now
15:21garndtthose were the only two things I believe
15:25bstackI believe so, yeah.
15:25grenadepmoore: i think (hope) it might be a fluke. the error is something we see when the ami creation instance is a dud.
15:25grenadedid you see it on more than gecko-1-b-win2012 ?
15:25* pmoore checks
15:26pmoorei only see it on that worker type, yes
15:27pmoore to clear partition table on disk 1
15:27* armenzg goes through node/npm/yarn and Heroku dance
15:27pmoore(lol because the url didn't paste properly)
15:29grenadei've seen the error when the os has failed to init properly. many of the built-in ps functions fail with the same "no such function" errors
15:29pmooreah ok
15:29pmoorei'll roll back the ami ids manually
15:29pmoorethanks grenade!
15:29pmooreRyanVM: rolling back amis, see above
15:29grenadetrying a manual redeploy of gecko-1-b-win2012 now
15:30grenadeand tailing the ami log to verify
15:36pmooregrenade: RyanVM: i've reset the AMIs in the worker type definition to use the previous ones, workers should start coming online shortly (could take a while to clear the backlog though)
15:36RyanVMok, thanks for the update
15:37pmoorei'll bump the maxCapacity for the next hour to something enormous
15:39pmoore!t-rex: win2012 backlog should start clearing shortly
15:39dustinwcosta: ^^ is that working for you now?
15:42wcostadustin: yes!!!
15:46pmooregarndt: grenade: RyanVM: i've created bug 1399524
15:46firebot NEW, Building pending backlog for gecko-1-b-win2012
15:46garndtthanks sir
15:49pmooreAryx: sorry, i only just saw your message! i've cc'd you now to the bug
15:50pmooreyay pending dropped from 200 to 85
15:51RyanVMpmoore: most importantly, *my* builds are running now :P
15:52grenadepmoore: reattempt seems to have succeeded:
15:52grenadeor rather:
15:53pmooregrenade: awesome!
15:56* pmoore now checks if he broke anything with his puppet patch
16:00pmooreaobreja|afk: garndt: arr: this isn't looking good :((((
16:00pmooreah, it just dropped a couple
16:00pmooremaybe this is normal?
16:01garndt2k? normal really (not saying it's good, but normal)
16:02dustinit's the trend to look at
16:03garndtpending wait times are within the usual range for that worker type
16:05pmooreok, i see jobs being taken, e.g.
16:06pmooreif only we had coalescing hey .... ;)
16:10pmooregarndt: looking good - e.g. is with 10.2.2 and completed successfully
16:10pmooregarndt: i think we're ok
16:10pmoorearr: aobreja|afk: fyi ^^^ puppet change looks ok
16:33pmooreAryx: arr: !t-rex: RyanVM: i'll be landing the macOS test-coalescing bug shortly, and garndt has kindly offered to half-babysit it. it is only a gecko change, so rolling back is just a regular hg backout in gecko. it should only affect macOS and some win10 tests (gpu ones)
16:33pmoorein short, is that ok, and are you guys happy to back it out if it causes trouble? i hope it doesn't, but you know ..... computers
16:34arrpmoore: could you drop a note to firefox-ci, please?
16:35pmooregarndt would be the contact person in case of questions (and dustin reviewed all the patches so knows the changes best) .... i'm out sporadically tomorrow and fridays are not good for landing changes - so with it being early US time it seemed like a possibility
16:35pmoorearr: sure
16:35arrthat way we can either celebrate success or know what to look for if things go sideways :D
16:35arr(and it keeps folks not in this channel in the loop)
16:42Aryxpmoore: ok
16:42pmoorethanks guys!
16:42pmoorearr: email sent, and change landed on mozilla-inbound
16:43pmooregarndt: thanks for monitoring!
16:43arrpmoore: thanks!
16:56dustinjust tossing that out there?
16:57ulfrdustin: was meeting with jonasfj and garndt
16:57garndtno, we were in a meeting
16:58dustinit bears repeating..
16:58* ulfr sprinkles security fairy dust on taskcluster
17:27jmaherwhimboo: which thing are you pointing out?
17:33whimbooi wanted to run the wdspec tests
17:33whimboobut none of the selected platforms actually contains them
17:33whimbooso we build instead of giving a failure
17:34whimboowould be really nice to know up front if something is wrong
17:34whimbooand doesn't meet any criteria
17:34whimbooi wasted 2h of time on that :/
17:36dustinwhimboo: try fuzzy :)
17:36whimboovia mozreview?
17:37dustinI don't think that's hooked up yet
17:37dustinbut via './mach try'
17:37whimboothat's where I would need it :/
17:37whimbooahal|afk: ^ do we have a bug for the mozreview integration if we still want to do that?
17:44bhearsumbstack: i see you self assigned - i think i just found the PEBKAC after groping through the github API
17:45bstackwell that's good news :p
17:45bhearsumi found this:
17:45bstackthe github API is deeply confusing
17:46bhearsumwhich says i'm missing a scope that i thought i granted, which makes me think that Release events end up using a different Role than Push or Pull Request events
17:46bstackyep, they do. let me find the docs real quick
17:47bstackoh hey, look. that appears to be completely undocumented. I'll write the docs now I guess
17:47bstackthanks for filing the bug :)
17:48bhearsumhehe, np
17:48bhearsumlooks like it's :release based on
17:58bhearsumbstack: another thing i noticed, it looks like tag creation actually makes a commit event, not a release event
17:58bhearsumyou can kind of see this based on the error i got in response to creating a tag:
17:59bhearsumnot a deal breaker, but i think the docs claim that tagging on github triggers a release event
17:59bstackoh interesting. I'm trying to page all of this back in now.
17:59bstackok, I'll poke at that a bit and update the docs if it is wrong
17:59bhearsumnp, let me know if i can help at all
18:07armenzg_brbhassan: had you seen this?
18:08armenzgcool :)
18:12hassanarmenzg: haven't seen it. the author of the repo is in our office i believe ;)
18:22dustinhassan: ^^ might be worth connecting with gps
18:32camddustin: I was replying to ryan about how we handle ESR52 tier-3 settings and hiding going forward. I realized I had assumed those changes would be back-ported. But I shouldn't assume. :) Are you already back-porting your changes?
18:32dustinI think we call it "uplifted"
18:33dustinand, yes, but I don't know how practical that will be for ESR52..
18:33camdahh right.
18:33camdwhy is that?
18:33dustinit's really old :)
18:33dustinand most jobs for ESR52 are still done by Buildbot
18:34dustinI don't know if Buildbot can configure a tier
18:34camdahh, ok.
18:34RyanVMthat was going to be my next question
18:34RyanVM"What about buildbot jobs?"
18:34camdI have hard-coded hiding buildbot jobs for esr52
18:34dustinfun :)
18:34camdThere are only a couple that applied to esr52, really. Gtest, I think.
18:34dustinso yeah, we may have to leave some of that in place until the ESR is dead
18:34RyanVMcamd: I'm assuming you've thoroughly audited the existing rules as part of all this :)
18:35* dustin looks innocent
18:35camdRyanVM: very VERY thoroughly.
18:35camdthough that doesn't mean something won't slip through. :)
18:36RyanVMoh nice, looks like we actually killed off the win pgo gtests at the scheduler level now
18:36* RyanVM doesn't even see them on Beta
18:36camdThere was a way to do the reverse of excluding in treeherder. Show ONLY the jobs that would be excluded in a repo
18:37camdso I was able to see what bb jobs got excluded and put those signatures into treeherder. I did so for each of the main repos (dev, release and such)
18:37dustinRyanVM: I'm curious, how bad could it be?
18:37dustinif there was, by chance, a flaw in the very, very thorough analysis
18:37camdThere was only maybe 12 signatures to hide, tbh.
18:37armenzgEli: should neutrino in general be a devDependency? It is not a package for the project itself but to bootstrap it
18:37camdIt may have been very, very VEEEEERY thorough. Hard to say for sure...
18:37dustin(I've been assuming "not very" since it's easy enough to land a patch to the tree concerned, and until then you just look at some jobs you didn't want to see)
18:37armenzgI think we can use preinstall
18:38armenzgto add it
18:38RyanVMdustin: if we can hard-code things, not overly worried about it
18:38dustinBB does throw a wrench into that, since it's not easy
18:38dustinso you're comfortable with a few "we shouldn't see tasks x, y, z" and either fixing those with an in-tree landing (if possible) or hard-code in treeherder (if not)
18:39dustinat least one of the things we uncovered was the opposite: tasks were hidden that shouldn't have been
18:39dustin..which makes me sad since it means we're burning capacity on jobs we're explicitly not showing to anyone
18:40camddustin: was that hard to fix/find those?
18:41dustinthat case is easy: it will be "fixed" once you remove the exclusions :)
18:41camdheh, true. :)
18:42RyanVMyeah, doesn't sound like the end of the world
18:42RyanVMany way to export the old rules in an easily human-readable format for reference?
18:42RyanVMalso, this will make for fun on Try pushes of older revs
18:42dustinyeah, the export I got was .. less than human readable :)
18:43RyanVMOTOH, those have been thoroughly broken lately by other TC changes anyway
18:44dustinin general I hope we're not hiding too much, especially on try
18:44camddustin: sorry about that... I could probably massage it to make it better, if that's worth it to you
18:44dustinno, it's OK -- I reformatted it, but it's big long lists of identifiers that don't quite line up with in-tree identifiers
18:44RyanVMbasically, for our new SV sheriffs, are we going to make it easy for them to realize that this new bustage is from a job that used to be hidden but now isn't for some reason?
18:44dustinit's funny that TH and the source speak such different languages :)
18:45RyanVMCC coop ^
18:45Eliarmenzg: yes devDependency, not app dep
18:46dustincamd: thoughts on some intermediate change where TH shows things but flagged in some way those folks can recognize as "should be hidden"
18:46dustinand file bugs for?
18:46dustineven if that's just in the job details, or something, whatever's easiest
18:46armenzgEli: why is it under app dep in here?
18:47camddustin: thinking. Not sure how we'd do that, tbh...
18:48Eliarmenzg: at the time it was easier to get heroku to install neutrino than trying to wrangle it into installing devDependencies
18:48dustinmaybe the SV sheriffs just run TH with exclusions disabled (by clicking the button) for a while and see what they spot?
18:48dustin"this is busted ,but it goes away when I click the button.. plz 2 fix"
18:49armenzgEli: is it easier now?
18:49camddustin: you can also query treeherder with a querystring param of ``visibility=excluded`` or ``included``
18:49camdit defaults to excluded
18:50camdif you set it to ``included`` then it shows only what is getting hidden
18:50Eliarmenzg: i think so but i havent tried it yet
18:50dustinyeah, I'm just thinking of a way for the sheriffs to find these issues before we throw the switch
18:51armenzgEli: roger that
18:51KWiersodustin: if we flip the switch before oct6, hopefully the non-SV-sheriffs are still around to catch and fix most of the fallout
18:52* dustin looking at some links to see if we're talking MOAB or bottle rocket fallout here
18:52camdOops, I had that backward. To see what's excluded, you'd use:
18:52dustinso I see a lot of GTest, still
18:53dustinand /cc sfraser funsize -- I think those are tier 3 already though?
18:54dustinhm, doesn't have extra.tier
18:54camdthose are from BB on esr52. As of today, TH will set those to tier-3 as they're ingested. We just haven't had any new ones today
18:54sfraserExisting funsize is in the excluded list from when it was broken, and never got removed
18:54camdBut it occurred to me that for the BB jobs, I could run a DB update query which sets them to tier-3 retroactively. I think that's worth it
18:55camdsfraser: yeah, as I recall when I saw them, they looked like they were all passing
18:55camdso shouldn't have been excluded anyway?
18:55dustinsfraser: oh, and I just realized those are the out-of-tree-generated tasks, which is why they don't have a tier
18:55dustinso nothing to change there
18:56sfraserNew funsize was going to be tier 3, but if I want to fold things in and remove 800 duplicate tasks, they have to be tier 1, or beetmover and balrog can't depend on them
18:57sfraserCinema about to start, will catch further messages later
18:57dustinyep, I think that makes sense -- all those tasks are visible
18:57dustinso I see GTest on release, esr52, and (passing) beta
18:57dustinshould we try to land a patch to release and esr52 to mark that as tier3? and am I right in thinking they are green in beta so nothing to worry about in beta/central?
19:01camddustin: I think they're BB in esr52, right?
19:01dustinthe failing ones, yes
19:01dustin(passes on linux in TC)
19:03camdright, ok. I'll just update the entries in the DB so that they're tier-3 retroactively
19:03camdthat way we're already up to date
19:05dustinI also see OS X 10.7 opt tc[tier-2](B) failing on esr52
19:05dustinI think that's the last of it
19:05dustinthat is a TC job (cross-compile)
19:05dustinand explicitly tier 2 in the in-tree config, but easy enough to make tier-3
19:13camdOK, I fixed those GTest BB entries to be tier-3
19:25garndt!t-rex deploying new mozilla-taskcluster which removes the uses of the scheduler
19:31bhearsumcongrats on killing that :)
19:34garndtnext up, let's actually kill mozilla-taskcluster
19:40garndtoutreachy can't come soon enough
19:40dustinperhaps for the purposes of avoiding horrifying our Outreachy person, we should refer to it as "retiring" mozilla-taskcluster
19:40dustinat least initially
19:40dustin"sending it to a home"
19:40dustin"putting it out to pasture"
19:41garndtgoing old yeller on it?
19:44dustinhaha, yeah
14 Sep 2017
No messages