mozilla :: #taskcluster

17 Apr 2017
15:16 <wcosta> dustin: I pushed a new patch version at bug 1350413
15:16 <firebot> https://bugzil.la/1350413 ASSIGNED, wcosta@mozilla.com Move macosx64 opt builds to buildbot-bridge
15:16 <wcosta> it does the magic in morph as you suggested
15:33 <dustin> awesome :)
16:58 * bstack is deploying all of our services with new lib-api. yell if you notice something awry
16:59 <ckousik> bstack: Should I get diagnostics to run every 10 min for a while?
16:59 <bstack> ckousik: yeah, that sounds good :)
17:00 <bstack> ty
17:05 <ckousik> done, I'll keep an eye on diagnostics. Will let you know if something breaks
17:05 <bstack> :)
17:07 * dustin wishes luck
17:08 <bstack> it's such a small change. I doubt anything will asplode
17:08 <bstack> but you never know
17:21 <dustin> bstack:
17:21 <dustin> Queue/createTask is idempotent with no self-dependency
17:21 <dustin> in email
17:21 <bstack> ah. I see that now
17:22 <bstack> will roll back
17:22 <ckousik> bstack: failing task
17:23 <bstack> both services rolled back
17:23 * bstack reads the scopes to see what happened
17:26 <bstack> hmm
17:26 <bstack> that email doesn't tell me that much that I can see :/
17:27 <bstack> oh
17:28 <bstack> I think it's just that a task was still in pending instead of running?
17:28 <dustin> ooo, pretty new spinner
17:29 <bstack> :D
17:29 <bstack> the stark black on white seems a bit too eye-catching for me
17:29 <bstack> but I'm happy in general
17:30 <dustin> that's why I liked the blue :)
17:30 <dustin> but this is fine
17:31 <bstack> patches accepted :p
17:32 <ckousik> I need to rework diagnostics
17:36 <bstack> ckousik: does https://pastebin.mozilla.org/9019229 look like a diagnostics bug rather than something I broke?
17:36 <bstack> it seems like the only issue is that the task was still pending
17:36 <bstack> which I think might've happened because we ran it more frequently?
17:36 <ckousik> definite diagnostics bug
17:36 <bstack> ckousik: it's pretty neat as-is, but we can always make it a bit nicer :)
17:36 <bstack> ok, cool
17:37 <bstack> I'll roll forward
17:37 <ckousik> because that test just passed
17:37 <bstack> thanks for being on top of this, both of you!
17:37 <bstack> ok, well I did roll back
17:37 <bstack> so it might've been me after all if it passed now
17:37 <bstack> I'll roll forward and we'll see if we can make it pass then
17:37 <bstack> is there a way to manually trigger that test?
17:38 <ckousik> We could just run it from the repository
17:39 <bstack> ok, 390 is going out again
17:40 <ckousik> My WiFi stopped working
17:41 <ckousik> Someone else needs to run it from the repository
17:42 <bstack> eh, it's ok. I'm pretty sure it is working. the logs look good, I can trigger tasks, and nothing in sentry.
17:42 <ckousik> We need to run DEBUG='*:test' node lib/main.js
17:42 <bstack> ckousik: just let me know what the results are when it runs in 10 minutes again?
17:43 <ckousik> I can't open anything
17:43 <ckousik> My phone data may not be able to handle the diagnostics page
17:43 <bstack> ah, ok
17:49 <garndt> jonasfj: dustin will be looking into running our worker for linux talos tests. Is tc-worker in a state where we should go down that route or should we use generic-worker like we are for OS X?
17:50 <dustin> oh good point :)
17:50 <jonasfj> I think tc-worker is close... Or will get there...
17:51 <jonasfj> I filed a PR with the integration test framework this weekend...
17:51 <jonasfj> And will start porting and fixing tests Pete wrote...
17:51 <jonasfj> *fixing the bugs his tests found...
17:54 <ckousik> bstack: All tests are passing. Will schedule it to run daily.
17:54 <dustin> I think it's far enough out that I'll shoot for tc-worker
17:54 <dustin> we have both in puppet now anyway
17:55 <garndt> yea, we won't have the actual hardware until mid-July anyways
17:56 <bstack> ckousik++
17:56 <bstack> ckousik: looks like it failed again?
17:56 <bstack> just got an email
17:56 <bstack> same test as last time
17:56 <bstack> seems like it is pending again
17:58 <ckousik> I'll have a look at the daily logs. If this is a regular occurrence then better to fix this test
18:00 <bstack> ok, sounds good
18:00 <bstack> I'm 99% sure this isn't due to my change, so I'm going to push on
18:02 <ckousik> probably not, but this test hasn't failed in a long time
18:04 <ckousik> I'm sure it's a diagnostics bug
18:04 <ckousik> Maybe it's the increased frequency of diagnostics that caused a failure?
18:12 <bstack> quite possible
18:14 <bstack> oh, I think I see what is happening, ckousik. Normally, the tests pass because the tutorial workertype isn't already on when the test starts. When run more frequently than daily, the worker is already turned on, and the task actually starts running in between
18:14 <bstack> https://github.com/taskcluster/taskcluster-diagnostics/blob/master/src/diagnostics/queue_test.js#L53-L57
18:14 <bstack> so the assertion fails
18:15 <ckousik> Maybe we shouldn't compare that field, then. Just make sure the id is the same
18:15 <bstack> yeah, I think that's a valid fix :)
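A minimal sketch of the fix ckousik suggests, assuming a Node assert-style check around queue.createTask; the function and variable names are illustrative, not the actual taskcluster-diagnostics code:

```js
// Illustrative only: assert on the taskId rather than the full status,
// so the check passes whether the task is still 'pending' or has already
// started 'running' on the tutorial workerType.
const assert = require('assert');

async function verifyCreatedTask(queue, taskId, taskDefinition) {
  const result = await queue.createTask(taskId, taskDefinition);
  // Previously (roughly): assert.deepEqual(result.status, expectedStatus),
  // which fails once the worker is already up and the state flips to 'running'.
  assert.equal(result.status.taskId, taskId);
  return result;
}
```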
18:23 <camd> Eli: I assigned a new treeherder-manifest PR to you in Bugzilla. I don't seem to have access to assign a PR to you in github. Are you able to grant me that?
18:27 <Eli> camd: sure, I'll take a look today
18:28 <camd> Eli: awesome, thanks man. :)
18:28 <Eli> camd: np :)
18:41 <dustin> !t-rex I moved about half of the passwords from lastpass to passwordstore
18:41 <dustin> details on accessing it are in https://github.com/taskcluster/passwordstore-garbage
21:44 <dustin> [root@t-yosemite-r7-0048.test.releng.scl3.mozilla.com ~]# curl puppet:8020/v1/credentials
21:44 <dustin> {
21:44 <dustin>   "credentials": {
21:44 <dustin>     "clientId": "assume:project:releng:host-secrets:host:com.mozilla.scl3.releng.test.t-yosemite-r7-0048",
21:44 <dustin>     "accessToken": "....",
21:44 <dustin>     "certificate": "{\"version\":1,\"scopes\":[\"assume:project:releng:host-secrets:host:com.mozilla.scl3.releng.test.t-yosemite-r7-0048\"],\"start\":1492465404491,\"expiry\":1492811004491,\"seed\":\"Xjkouw5MTHSJhnkAiI289QXYlCdVWFTlCWJMwKM5f_fw\",\"signature\":\"OMR1A6Lde0lGRgEBzHLeDePc6MKuN7I2b+GzMmdXAbE=\",\"issuer\":\"project/releng/host-secrets/production\"}"
21:44 <dustin>   }
21:44 <dustin> }
21:44 <dustin> sorry, irccloud pastebin fail
21:44 <dustin> success!
21:44 <dustin> the clientId is wrong, but what can you do :)
21:45 <dustin> export TASKCLUSTER_CLIENT_ID_BASE="project/releng/host-secrets/host/"
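A small sketch of how a host might consume the credentials returned by that endpoint, assuming Node with the taskcluster-client package; the endpoint URL matches the curl above, while the function name and the ping check are illustrative:

```js
// Illustrative only: fetch temporary credentials from the host-secrets
// service shown above and use them to build an authenticated Queue client.
const http = require('http');
const taskcluster = require('taskcluster-client');

function fetchCredentials(cb) {
  http.get('http://puppet:8020/v1/credentials', res => {
    let body = '';
    res.on('data', chunk => (body += chunk));
    res.on('end', () => cb(JSON.parse(body).credentials));
  });
}

fetchCredentials(credentials => {
  // credentials contains clientId, accessToken, and certificate
  const queue = new taskcluster.Queue({credentials});
  queue.ping().then(() => console.log('authenticated against the queue'));
});
```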
21:58 <glandium> dustin: can we sort out bug 1356529 before your end of day? I may not have been entirely clear how this all works, so please shoot me questions
21:58 <firebot> https://bugzil.la/1356529 NEW, mh+mozilla@glandium.org Add a `mach artifact toolchain` option to get toolchains for use for a specific build job
21:58 <dustin> sure
21:59 <dustin> I don't know if it was unclear..
21:59 <dustin> it looks like the mach command is re-running part of task-graph generation outside of the decision task
22:00 <glandium> dustin: only to find the index-path of the dependencies (and the mozharness config path, until bug 1356952 allows getting the tooltool manifest path directly)
22:00 <firebot> https://bugzil.la/1356952 NEW, nobody@mozilla.org Move as much tooltool manifest definitions as possible to taskcluster job definitions
22:02 <dustin> it's the use of part of the task-graph generation that is at the core of my objection
22:03 <glandium> how do you suggest one finds the dependencies for a given task given its name, and then the index-paths associated with those dependencies, without having a separate reimplementation of the same code, which would be even sillier?
22:04 <dustin> why do you need to find the dependencies?
22:04 <dustin> the dependencies are already known when the task is created, so just inject them directly
22:05 <glandium> dustin: because the same code can run outside taskcluster. How do you suggest /that/ works?
22:05 <dustin> task.env.TOOLCHAIN_BUILD_TASK_ID = {'task-reference': '<toolchain-build>'}
22:05 <dustin> that should look in the index paths
22:05 <dustin> but don't try to find the task first and then the index paths -- find the index paths (using string substitution) then use the index to find the taskId
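A rough sketch of the lookup order dustin describes — build the index path from the job name by string substitution, then ask the index service for the taskId — using the taskcluster-client Index API; the namespace pattern and job name below are made up for illustration:

```js
// Illustrative only: resolve a toolchain dependency outside of task-graph
// generation by going index path -> taskId, not task -> index path.
const taskcluster = require('taskcluster-client');
const index = new taskcluster.Index();

async function findToolchainTaskId(jobName) {
  // Hypothetical namespace pattern; the real index routes live in the
  // in-tree task definitions.
  const indexPath = `gecko.cache.level-3.toolchains.v1.${jobName}.latest`;
  const {taskId} = await index.findTask(indexPath);
  return taskId;
}

findToolchainTaskId('linux64-clang')
  .then(taskId => console.log('toolchain taskId:', taskId));
```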
22:06 <glandium> ok, let's back out a little, it seems we're not using the same terminology
22:07 <dustin> would you prefer vidyo?
22:07 <glandium> webrtc works better, but yeah, some videoconferencing would probably streamline it
22:08 <dustin> my room
22:12 <glandium> I hope I can log on vidyo this time... it tends to want me to
22:13 <dustin> it's pretty awful :(
22:13 <dustin> I have a dedicated iPad now which is less awful
22:14 <glandium> I have a dedicated tablet, still requires me to login almost every time
22:14 <glandium> and it rarely works
22:14 <dustin> I think they put their least-bad engineers on their iOS app
22:15 <dustin> most bad on Linux
22:15 <glandium> and I have a long password
22:15 <dustin> second-most-bad on Android
22:15 <dustin> yeah, me too :)
22:15 <dustin> and a hard one to type on a tablet
22:17 <glandium> whoohoo, got in
22:17 <dustin> :)
23:02 <dustin> http://gecko.readthedocs.io/en/latest/taskcluster/taskcluster/taskgraph.html#task-parameterization
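For context, a tiny sketch of the task-parameterization described at that link: the {'task-reference': ...} wrapper from dustin's earlier line is replaced at task-creation time with the taskId of the named dependency (the label, env var, and taskId below are made up):

```js
// Before substitution, as written in the task definition:
const before = {
  env: {TOOLCHAIN_BUILD_TASK_ID: {'task-reference': '<toolchain-build>'}},
};
// After the decision task resolves the dependency labeled 'toolchain-build'
// (illustrative taskId):
const after = {
  env: {TOOLCHAIN_BUILD_TASK_ID: 'f2cqgZ3bTWCvZGG-QpKUMg'},
};
```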
23:35 <glandium> dustin: mach taskgraph target-graph takes 15s
23:35 <glandium> :(
23:40 <glandium> there doesn't seem to be any subcommand of mach taskgraph that runs under 15s
23:53 <glandium> target-graph's output is also not useful
18 Apr 2017
No messages
   