mozilla :: #releng

17 Apr 2017
14:31RyanVMdid something change on the windows build machines recently?
14:31RyanVMIOError: [Errno 2] No such file or directory: 'c:/builds/relengapi.tok'
14:31RyanVMhrm, who's buildduty with RO on holiday today?
14:32philorheh, you funny
14:33philorbuildduty's that button with a circle with an arrow in it, though I'm not sure why TH isn't showing my retriggers from quite a while ago
14:35RyanVMphilor: two hits on the same push makes me...nervous
14:36philorRyanVM: wouldn't surprise me in the least if the answer we'll never get is "running an SM build as the first job taken by an instance is broken"
14:37philorthough spot-106 looks less like an infant than like a lifetime loser
16:19CallekRyanVM: indeed nervousness about esr52, but I also note that glandium has been landing stuff that affects/changes how tooltool works on central...
16:19* Callek isn't sure where he is in landing things
16:20RyanVMthe retriggers came back green, so dunno
16:25grenadeaki, dragrom: are you happy for your puppet default changes to be merged to production?
16:26akigrenade: mine is transplanted to prod already
16:26travis-cibuild-puppet#1209 (master - 1b3fd29 : Rob Thijssen): The build passed. (
16:39armenzgrail: I'm building a Dockerfile from scratch; is it a bad idea to clone the braindump repo into it?
16:40railarmenzg: not really. If you want reproducible images, I'd add -r some_revision
16:41Callekarmenzg: I'd also note that the docker layers are recreated based on whats specified in there, so if you recreate the dockerfile but have a layer cached locally that did the clone you won't get a new rev unless you have a different command for it
16:41armenzgrail: OK great! would you be interested in reviewing the code when ready? It is to allow generating allthethings.json inside of a Docker container
16:41railarmenzg: ooh, sure
16:42armenzgCallek: would you mind elaborating? or rephrasing?
16:42* Callek has been bothered by allthethings.json being in braindump
16:42Callekarmenzg: sure, I can try
16:42railah yeah, as Callek says, I'd combine all apt operations in one line
16:42travis-cibuild-puppet#1210 (production - 59b1c3f : Rob Thijssen): The build passed. (
16:42Callekarmenzg: docker commands are a set of unique "things": docker caches/sets the output of each command as a "Docker Layer", which is the "image"
16:42rail+ apt-get clean
16:43Callekarmenzg: if the commands are static, then a change in only one layer may mean that prior layers don't get re-run, such as "hg clone ..."
16:43Callekit's taken effect with apt-get stuff too
16:43railarmenzg: something like this:
16:44Callekarmenzg: since for example on apt-get, if you do apt-* and then apt-clean on a separate line, your overall docker size is much larger than if you do `apt-get clean` as part of the apt-get command(s), since then you won't have the apt-cache stuff in place for the layer
16:48armenzgCallek: rail I've pushed a new version; does it look like I got it?
16:49armenzgty for your feedback! I will be going into a meeting
16:49* armenzg needs to get ready
16:49* gcox tosses in a vote for "apt-get --no-install-recommends --no-install-suggests -y install" for any installs, since apt-get has started thinking it knows better than you.
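The single-RUN pattern rail, Callek, and gcox describe above can be sketched like this (the package names are placeholders, not the actual list from armenzg's Dockerfile):

```dockerfile
FROM ubuntu:16.04

# Update, install, and clean in ONE RUN instruction so the apt cache
# never survives into a committed layer (smaller image, per rail/Callek),
# and skip recommends/suggests per gcox's tip.
RUN apt-get update && \
    apt-get install -y --no-install-recommends --no-install-suggests \
        mercurial \
        python && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```

Splitting `apt-get clean` into its own RUN would not shrink the image: the cache files would already be baked into the earlier layer.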
16:49Callekarmenzg: so, to elaborate more, running a docker creation on a file like:
16:49armenzgalso, how can I connect to the container?
16:50Callekarmenzg: if you change the greeting line, after creating the image, and after checking in a commit or two to foobar, you don't actually get an updated created_at or a newer foobar
16:51armenzgonly a given layer is rebuilt w/o recreating the others, right?
16:56Callekarmenzg: well, the algorithm has eluded me; I think it only gets done that way if you have the *other* layers locally, *and* all layers depend on the prior layer, so if you changed the date command you likely would re-do everything
16:56Callekthe idea here is that (in a perfect world) every layer is idempotent and would give the same resulting setup each time it's run
16:56Callekthat's not necessarily the case, hence why we have full output cached and saved rather than just re-running commands
16:57* Callek only learned of those gotchas within the past year, and never dived deeper
17:15gcoxKeep in mind that even an unchanged layer MAY have changed / bit-rotted. For example, a base "FROM ubuntu:16.04" changes over time, and the one I cached 6 weeks ago is different than the one you pull this week. Same as if you do apt-get update, you're pulling in package versions that aren't bit-for-bit identical with me/anyone else; OR if your apt-get update line is early in a dockerfile, it may never re-run for you.
17:15gcoxIn case bit-for-bit identical behavior is something you need to think about. I hear build people think that way. :)
17:18Callekgcox: yea, that was basically what I was trying to get at :-)
17:18Callek(thanks for being more explicit)
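Callek's example Dockerfile from 16:49 never made it into the log; here is a plausible reconstruction of the caching gotcha he and gcox describe (the foobar repo URL is hypothetical):

```dockerfile
FROM ubuntu:16.04

# Both commands below are non-deterministic, but Docker caches their output:
RUN date > /created_at
RUN hg clone https://hg.example.com/foobar /src

# The "greeting line": editing ONLY this line reuses the cached layers
# above it, so /created_at and the foobar checkout stay stale.
RUN echo "hello world"
```

Docker replays cached layers top-down until it reaches the first changed instruction; everything after that point is rebuilt, everything before it is not. Pinning the base image by digest (`FROM ubuntu@sha256:<digest>`) is one way to limit the base-image bit-rot gcox mentions.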
17:22heftigwhat needs to change when building FDE after the migration from aurora to beta?
17:24Callekheftig: you mean locally, or ... ?
17:24heftigi'm building and packaging fde for Arch Linux
17:25Callekrail: bhearsum: any current advice for heftig here? -- or should this be some sort of followup?
17:25bhearsumthe story isn't finalized yet
17:26bhearsumthe most likely case for us is that we'll be doing partner repacks of vanilla beta builds
17:26heftige.g. i'm cloning the mozilla-aurora repo, using --with-branding=browser/branding/aurora, --enable-update-channel=aurora, and not adding MOZ_ADDON_SIGNING/MOZ_REQUIRE_SIGNING
17:26bhearsumwhich means the same binaries, but we'll change the branding and some prefs with distribution/distribution.ini
17:26bhearsumi'm not sure what that means for distro builds
17:27heftigthe blog post i just read says this happens tomorrow
17:28bhearsumtomorrow is when we stop building new aurora dev edition builds
17:28bhearsumbeta based ones will not be ready tomorrow
17:31heftighm, btw, just stumbled upon the mozilla-unified repo; should i use that one for all builds, release included?
17:31bhearsumi don't know anything about that one, gps might
17:35Callekheftig: at this point fresh clones of m-u and fresh clones of m-c will be in the same storage format... if you are building all on the same machine anywhere near the same time, using m-u (and/or caching it locally) would be best, since it's all in one place, and shares a lot of the same storage
17:35Callekno huge wins if its always a one-time-fresh-clone though
17:35heftigyeah, that's the case
17:35heftigbuilding everything on a single machine with cached repos
17:36Callekyou could save yourself a few gigs of storage (and network time) if you used m-u then, and just used `hg up -r aurora` (or whatever) for the repo
17:36Callekand/or used `hg share` for source checkouts if you're worried about potential repo state stuff for your builds/source archives
17:36Callek(e.g. *.pyc files)
17:42heftigCallek: the build process makes a fresh clone using `hg clone -u <ref>` from the cached repo, anyway
17:43Callekheftig: yea, the cached repo could benefit from being mozilla-unified then
17:43Callekbut if you do a direct <ref> you should be good otherwise I think
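Callek's suggestion above might look like this (the paths and the `aurora` bookmark are illustrative):

```shell
# One cached clone of mozilla-unified covers central/aurora/beta/release
hg clone https://hg.mozilla.org/mozilla-unified ~/cache/mozilla-unified

# Update the cache and check out a branch via its bookmark
hg -R ~/cache/mozilla-unified pull
hg -R ~/cache/mozilla-unified up -r aurora

# Or give each build its own working copy that shares the cached store,
# avoiding stray state (*.pyc files etc.) between builds
hg --config extensions.share= share ~/cache/mozilla-unified build-aurora
hg -R build-aurora up -r aurora
```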
18:04catleefubar: how can I tell if a user has l1 access or not?
18:04fubarcatlee: I'm not sure that you can, short of asking the MOC or an LDAP admin
18:04gpsheftig: anything in automation [at mozilla] should be using `hg robustcheckout` with mozilla-unified as the --upstream repo
18:05catleefubar: ok, I'll ask moc
18:05fubarcatlee: *I* can, or someone in relops with access to hg, for example
18:06heftigcan't install a custom extension, unfortunately
18:06heftignor activate any standard extensions, for that matter
18:08Callekgps: tl;dr hef.tig is a distro maintainer creating builds for Arch Linux.
18:09gpsthen it shouldn't matter much
18:09gpsmozilla-unified is nice in that all the changesets are in one repo
18:09Callekyea, doesn't much matter, but can save some disk space
18:09gpsso if you can reuse a clone between builds, mozilla-unified can save time and space
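For automation, the `hg robustcheckout` command gps mentions (from Mozilla's version-control-tools extension) is invoked roughly like this; the revision and paths are placeholders, and the exact flags should be checked against the extension's own docs:

```shell
# robustcheckout clones via a shared pooled store and retries on failure
hg robustcheckout \
    --upstream https://hg.mozilla.org/mozilla-unified \
    --sharebase /path/to/hg-shared \
    --revision abcdef012345 \
    https://hg.mozilla.org/mozilla-central \
    /path/to/checkout
```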
18:56travis-cibuild-tools#1741 (master - 67bfaea : ffxbld): The build passed. (
19:46Callekmostlygeek: so I saw in #balrog, but I'm not 100% sure what you mean by binary and what info we bake in....
19:47Callekmostlygeek: to maybe help slightly, we do bake the changeset into the build itself, which has links through to treeherder, which has links down to the individual tasks that ran....
19:47mostlygeekhey Callek
19:47mostlygeekCallek: that sounds exactly like what i'm looking for
19:48mostlygeekhow does one pull that info out? where is it?
19:48Callekmostlygeek: like an android beta -- or... a desktop one
19:48Callekmostlygeek: about:buildconfig in the browser should have it
19:48Callekmostlygeek: not sure if that's a format that's good for your need
19:49mostlygeekCallek: is there a json file or something inside? i'm thinking a web json api for services to query
19:50mostlygeekCallek: got a min to vidyo?
19:51Callekmostlygeek: sadly, today, only if it can't wait until tomorrow.... (I got some stuff I should get done house-wise, was going to stop in ~10min)
19:51* Callek *can* meet today though if there's a need
19:51* Callek thinks aki is another human who may know enough here
19:51mostlygeeki'll send a meeting invite for tomorrow
19:51Callekmshal: may know as well more about what's baked into the binary itself
19:52Callekmostlygeek: I don't think I can promise to know the answers here, but I'll certainly try :-)
19:52Callekmy calendar is up to date at least
19:57travis-cibuild-buildbot-configs#2543 (master - 04c68a1 : Rob Wood): The build passed. (
19:58mshalmostlygeek: you just want the hg revision? We upload json files like: or that have the repo/revision
19:59mshalthe target tarball/zip also has an application.ini file with that info
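To illustrate mshal's point: the application.ini shipped in the target tarball carries the repo and changeset under its [App] section. A minimal sketch of pulling those out (key names as found in Firefox builds; the sample values here are made up):

```python
import configparser

# Sample of the [App] section from application.ini inside the target
# tarball/zip (values are illustrative, not from a real build).
SAMPLE = """\
[App]
Name=Firefox
Version=53.0
BuildID=20170413192749
SourceRepository=https://hg.mozilla.org/releases/mozilla-release
SourceStamp=4a5cbeafb8a61f4b6d0d9f36152f55b5ebe0a9b9
"""

def repo_and_revision(ini_text):
    """Return (repository URL, changeset hash) from application.ini text."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    app = parser["App"]
    return app["SourceRepository"], app["SourceStamp"]

repo, rev = repo_and_revision(SAMPLE)
print(repo, rev)
```

The BuildID in the same section is what telemetry reports, so this file is one way to join a buildid back to a repo/revision.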
20:00mostlygeekmshal: so i was talking to mreid in telemetry and they don't have a good way to link the buildid/release channel info they get with an actual build + metadata
20:02mostlygeekthe missing link between the data the user sent and what/when/where we built the release is what i'm digging into
20:02mshalwhat's the data look like that you get?
20:03mostlygeekit's a mess of json ;)
20:03mostlygeekultimately it comes down to buildid .. are those fairly unique
20:04mostlygeekthey seem to be granular down to a second
20:04mshalbut not unique enough?
20:05mshalor what's the problem?
20:05mostlygeekthat might be unique enough
20:06mostlygeekmshal: i think using the buildid in the json is a good enough start on this
20:06mostlygeekthanks for the tips
20:09mshalmostlygeek: you may be able to use the buildid to find what you need in the Taskcluster index
20:10mshalCallek: are the things we ship all under 'signed-nightly'? Or is the releases index better for something like this?
20:22catleeCallek: do you know what "Can't Do Much! You can't see or issue tokens. If you think this is incorrect, head to #releng for help." means?
20:22catleefrom relengapi token management
20:26mshalCallek: and why did we want to put both signed and unsigned nightly artifacts into the index again? I can't come up with the reason :/
20:28KWiersocatlee: that's shown in the case where the user does not have can_view AND does not have can_issue:
20:28KWiersowhich get set at
20:35catleeKWierso: thanks
20:35catleebut what sets that....
20:51KWiersocatlee: initial_data, but I'm unsure where that gets set
20:51KWiersohere's where it was introduced a few years ago:
20:52KWiersobeyond that, ¯\_(ツ)_/¯
20:53catleea maze of twisty passages
20:58Callekmshal: because there are artifacts tests need in unsigned, that are not present in signed
20:58Callekmshal: iirc
20:59Callekmshal: has this come up elsewhere?
20:59mshalCallek: the tests can get the artifacts from the task graph though, not the index, right?
20:59mshalI'm mostly wondering in terms of directing people where to get artifacts - ideally we'd have a single place with the final artifact, rather than intermediate products
21:00Callekcatlee: I don't recall offhand what that means
21:00Callekmshal: we may need to revisit on a wider scale somehow, I'm not sure
21:00Callekmshal: as far as tests, our CI tests can get from task graph, but we may need to have it for local users running things
21:01Callekmshal: this is *feeling* like we need a gecko.v3 soon somehow
21:01mshalhow so? I think we could just make gecko.v2.X the signed version, and not have signed-nightly
21:02Callekmshal: but then we lose gecko.v2.X for unsigned?
21:02Callek(normal CI)?
21:02Callekbecause we'd have nightlies on the same rev as unsigned CI
21:03mshalnightlies are already in the separate index though, right? Eg: gecko.v2.mozilla-central.nightly vs gecko.v2.mozilla-central.revision/pushdate/latest
21:03mshalnightlies are already in the separate index though, right? Eg: gecko.v2.mozilla-central.nightly vs gecko.v2.mozilla-central.revision/pushdate/latest
21:04Callekhrm, maybe
21:05CallekI recall we went back and forth and had a bunch of last-minute revelations that caused us to go with the current design
21:05* Callek has to run, revisit tomorrow?
21:05mshalyeah, which bug was that? I can revisit and open a new one if it seems worthwhile
21:05mshalsure thing
21:07KWiersocatlee: so, info is put into initial_data here
21:08KWiersowhich gets things from
21:26akimkaply: coop: are either of you mozilla-partners admins? wondering if i can get write perms to the funnelcake repo for bug 1348127
21:31mkaplyaki: I am.
21:32akimkaply: nice. do you have login perms to partner-repack1 as well? (do you know what controls the access list?)
21:32mkaplyWhat's your github account?
21:33mkaplyI do not have login-perms for repack
21:33akimkaply: escapewindow
21:33mkaplyI just invited you to the org that has write access to funnelcake
21:43mkaplyaki: Of course the other person who knows partner builds is nthomas|pto
21:43akiyup :)
21:44mkaplyaki: I want to get to a point where it's common knowledge how to build partner builds (and even my team can do it). Feel free to make that happen :)
21:44akij.lund may have done it once, and may have access; i pinged him in #releaseduty
21:44akii'd love for it to be more automated
21:44akibut for now i&#39;m on the hook to get it done manually while n.thomas is out
21:49travis-cibuild-buildbot-configs#2544 (production - f950939 : ffxbld): The build passed. (
21:49travis-cibuild-buildbotcustom#1030 (production-0.8 - af5399b : ffxbld): The build passed. (
21:50travis-cibuild-buildbot-configs#2545 (master - 4ec3b4d : ffxbld): The build passed. (
21:53travis-cibuild-tools#1742 (master - c5091ca : ffxbld): The build passed. (
18 Apr 2017
No messages