mozilla :: #servo

10 Sep 2017
00:26docbrownCan I use RLS in the Servo tree somehow?
01:03travis-ciServo failed to build with Rust nightly: CC nox, SimonSapin, jntrnr
11:20noxNice issue.
11:20noxemilio: ^
11:49travis-ciServo failed to build with Rust nightly: CC nox, SimonSapin, jntrnr
12:30sewardjemilio: around?
13:52emiliosewardj: yes
14:05Mateon1I have a question, is it possible to disable Javascript's console logging (and uncaught exceptions) within servo? It makes it difficult to filter out actual Servo errors from what the webpage spits out
14:06Mateon1Also, it seems that console.error actually goes to stderr, while console.log goes to stdout, which is also slightly annoying
14:12est31what I've always wondered: does servo have an about:config like system
14:13noxest31: Yes,
14:13noxbut it's Sunday and I don't know how it's tweakable again.
14:13noxest31: Keyword is 'pref'.
14:13noxest31: We even have some DOM APIs that are prefgated.
14:34sewardjemilio: ping
14:35emiliosewardj: pong
14:35sewardjemilio: I pushed the fallible alloc patch, as you saw
14:35sewardjemilio: but got 2 check fails
14:36sewardjcontinuous-integration/travis-ci/pr failed because the build machine ran out of disk space, I think
14:37emiliosewardj: we don't block on that so that's fine
14:37sewardjemilio: /usr/bin/ fatal error: /home/travis/build/servo/servo/target/debug/deps/style_tests-2a65e68d55b4878a: No space left on device
14:37sewardjemilio: but the homu test also failed, and I am not clear why
14:37sewardjshell__2 './mach filter-intermittents ...' failed
14:38emiliosewardj: seems like your patch makes a few servo tests crash
14:39sewardjemilio: hmm, my try run was ok for Linux x64 Stylo-Seq opt
14:39emiliosewardj: jdm just commented
14:40sewardjemilio: how can I run those tests locally?
14:40emiliosewardj: that happens because servo is using jemalloc
14:40emiliosewardj: but we're calling into the system malloc
14:40sewardjemilio: (cd servo && ./mach test-unit -p style) ran ok, too
14:41emiliosewardj: you need to cd servo && ./mach test-wpt <path>
14:42emiliosewardj: I can fix it real quick
14:42sewardjemilio: that would be good. (but how?)
14:42emiliosewardj: you basically need to add a cargo feature that doesn't go through system malloc
14:45emiliosewardj: can you tick the "allow edits from maintainers" bit in your PR?
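(For anyone reading the log later: a hypothetical sketch of the kind of feature gate emilio describes. The feature name, function name, base address handling, and allocator choice here are all invented for illustration; they are not Servo's actual code.)

```rust
// Hypothetical sketch: gate which allocator a fallible-allocation helper
// uses on a cargo feature, so a jemalloc-linked binary never routes
// allocations through the system malloc. Names are illustrative.
use std::alloc::{alloc, Layout};

#[cfg(feature = "system-malloc")]
unsafe fn fallible_alloc(layout: Layout) -> *mut u8 {
    // Only compiled when the (hypothetical) feature is enabled;
    // would require the `libc` crate.
    unsafe { libc::malloc(layout.size()) as *mut u8 }
}

#[cfg(not(feature = "system-malloc"))]
unsafe fn fallible_alloc(layout: Layout) -> *mut u8 {
    // Rust's global allocator -- jemalloc when the binary installs it.
    unsafe { alloc(layout) }
}

fn main() {
    let layout = Layout::from_size_align(64, 8).unwrap();
    let p = unsafe { fallible_alloc(layout) };
    assert!(!p.is_null());
    unsafe { std::alloc::dealloc(p, layout) };
    println!("allocated and freed 64 bytes via the gated helper");
}
```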
14:46sewardjemilio: ok thanks (re fixing). Let me know if I can do anything
14:46sewardjk one mo
14:46sewardjemilio: done
14:59sewardjemilio: afk now, but back in about 20 mins
15:00emiliosewardj: just added a fix to your branch
15:00sewardjemilio: thanks!
15:19Mateon1I made a raw summary of what Servo spat out on the top 3.3k or so of Alexa top sites, counted and sorted by number of occurrences.
15:20Mateon1Tell me if anything is particularly interesting, I'm slowly sifting through this on my own, but I don't know which messages are interesting
15:27sewardjemilio: thanks. Is there anything else I should do at this point?
15:27emiliosewardj: I don&#39;t think so, that should be all :P
15:30bjorn3Mateon1: 61 WARNING: YOU ARE LEAKING THE WORLD (at least one JSRuntime and everything alive inside it, that is) AT JS_ShutDown TIME. FIX THIS!
15:30emilioMateon1: there are a lot of them which look pretty interesting.
15:30bjorn3Mateon1: 27 ERROR:servo: assertion failed: address != MAP_FAILED
15:30emiliobjorn3: those probably depend on the spidermonkey upgrade anyway
15:30emiliobjorn3: (the JS GC messages etc)
15:31Mateon1bjorn3: The address != MAP_FAILED is related to allocator failure in ipc code
15:31bjorn3 14 ERROR:servo: assertion failed: first.offset <= last.offset
15:32bjorn3 9 ERROR:servo: assertion failed: self.mode == other
15:32Mateon1self.mode == other is related to RTL layout, already reported
15:32bjorn3 8 ERROR:servo: assertion failed: !self.Document().needs_reflow() ||
15:32emilioMateon1: I think the WR assertions (and all assertions not related to system call failures) would be nice
15:32sewardjemilio: excellent
15:33bjorn3 5 ERROR:servo: assertion failed: self.is_double()
15:33bjorn3 4 ERROR:servo: called `Result::unwrap()` on an `Err` value: HierarchyRequest
15:34bjorn3 3 ERROR:servo: assertion failed: !descendant_link.has_reached_containing_block
15:34bjorn3 2 ERROR:servo: index out of bounds: the len is 0 but the index is 0
15:34bjorn3 2 ERROR:servo: Float position error
15:35bjorn3 1 ERROR:servo: Found an unpaired surrogate in a DOM string. If you see this in real web content, please comment on Use `-Z replace-surrogates` on the command line to make this non-fatal.
15:35crowbotIssue #6564: Support surrogates in the DOM? -
15:35bjorn3 1 ERROR:servo: Each render task must allocate <= size of one target! (3761487)
15:35bjorn3 1 ERROR:servo: Each render task must allocate <= size of one target! (20022002)
15:35bjorn3 1 ERROR:servo: Each render task must allocate <= size of one target! (1599015990)
15:36bjorn3 1 ERROR:servo: called `Option::unwrap()` on a `None` value
15:36bjorn3 1 ERROR:servo: assertion failed: self.reflow(ReflowGoal::ForScriptQuery,
15:36bjorn3 1 ERROR:servo: assertion failed: !self.nodes.contains_key(&id)
15:36bjorn3 1 ERROR:servo: already borrowed: BorrowMutError
15:37bjorn3 1 Error deserializing JavaScript
15:38bjorn3I am done
15:40Mateon1Okay, great, I'll be filing all those
15:45bjorn3 4 assertion failed: self.load.is_none() (thread ScriptThread PipelineId { namespace_id: PipelineNamespaceId(0), index: PipelineIndex(0) }, at /shared/dev/rust/servo/components/script/
15:46bjorn3 2 ERROR:servo: assertion failed: ==
15:46bjorn3 66 Assertion failure: isEmpty() (failing this assertion means this LinkedList's creator is buggy: it should have removed all this list's elements before the list's destruction), at /shared/dev/rust/servo/target/release/build/mozjs_sys-6a7905cfd7acaabc/out/dist/include/mozilla/LinkedList.h:332
15:48Mateon1bjorn3: Should I also file duplicates? For self.Document().needs_reflow() there are a LOT of existing duplicate issues
15:48Mateon1But I have stacktraces and addresses that caused the panic
15:49bjorn3You could respond to an existing issue
15:50Mateon1There are 4 issues about the reflow assertion, #18288 and #14239 both open
15:50crowbotIssue #14239: assertion failed: !self.Document().needs_reflow() || (!for_display && self.Document().needs_paint()) || self.window_size.get().is_none() || self.suppress_reflow.get() -
15:50crowbotIssue #18288: assertion failed: !self.Document().needs_reflow() || (!for_display && self.Document().needs_paint()) || self.window_size.get().is_none() || self.suppress_reflow.get() -
16:00Mateon1Commented on #17631
16:00crowbotIssue #17631: assertion failed: self.is_double() -
16:10bholleyemilio: yt?
16:14Mateon1It seems that the index out of bounds is related to "Too many open files", skipping those two
16:33WindowsBotDreamsOfElectricSheepMateon1: if you open too many files on linux, then select stops working
16:34WindowsBotDreamsOfElectricSheepcausing index out of bounds errors
16:34Mateon1WindowsBotDreamsOfElectricSheep: Yeah, I set rather strict ulimits and timeouts while crawling, mostly because a LOT of websites cause a rapid memory allocation loop with the -x flag
16:35Mateon1Well, "crawlink"
16:35Mateon1Oops, typo
16:37WindowsBotDreamsOfElectricSheepWindows doesn't have any way to limit total handles afaik
16:37WindowsBotDreamsOfElectricSheepbut there's plenty of ways to limit memory usage of a process with job objects
16:41sewardjemilio: thanks for helping out with this.
17:31emiliosewardj: np! :
17:31emiliobholley: now I am
17:31emiliobholley: what's up?
17:32bholleyemilio: was just going to ask the question I asked in the PR
17:32sewardjabout the inlining you mean?
17:32bholleyah, ok_or
17:33emiliobholley: just replied there, but tl;dr we're using `ok_or(Error::new`, so I don't see how that could be optimized out
17:33bholleyyeah, the or vs or_else distinction always gets me
17:33bholleysewardj: would be interesting to know if that 1-line fix alone fixes the perf regression
17:33emiliobholley: could've switched to ok_or_else instead, but it's just a trivial struct that it seemed easier to just inline it
17:34sewardjbholley: yes. I'll poke at the regression stuff tomorrow.
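(Editor's aside for readers following the or/or_else distinction: `ok_or` evaluates its error argument eagerly even on the `Some` path, while `ok_or_else` takes a closure that only runs on `None`. A standalone illustration; the counter is purely for demonstration, not Servo code:)

```rust
// Demonstrates eager vs. lazy error-value construction on Option.
use std::cell::Cell;

fn main() {
    let calls = Cell::new(0);
    let make_err = || {
        calls.set(calls.get() + 1);
        "error"
    };

    let val: Option<i32> = Some(1);

    // The argument expression runs before ok_or is even called:
    let eager: Result<i32, &str> = val.ok_or(make_err());
    // The closure is never invoked, because `val` is Some:
    let lazy: Result<i32, &str> = val.ok_or_else(make_err);

    assert_eq!(eager, Ok(1));
    assert_eq!(lazy, Ok(1));
    assert_eq!(calls.get(), 1); // only the eager form paid for the error value
    println!("make_err ran {} time(s)", calls.get());
}
```

This is why, when constructing the error is cheap (a trivial struct, as emilio notes), `ok_or` is fine; when it allocates or does real work, `ok_or_else` avoids paying that cost on the success path.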
17:34bholleysounds good
17:34bholleysewardj: emilio: do you happen to know if fnv hash is supposed to give good distribution for pointers?
17:34* bholley is getting tons of collisions with his rule node ptr bloom filter thing
17:35sewardjno idea
17:35sewardj(but I would assume that it shouldn't care what the input really is)
17:35emiliobholley: no idea off-hand, sorry. Note that rule nodes are created consecutively, so probably that doesn&#39;t help much
17:35sewardjotherwise it's a poor hash function
17:35bholleyemilio: consecutively in what sense?
17:36bholleyemilio: the ancestors of a given leaf are created consecutively
17:36bholleybut in general I'd think the leaves wouldn't be created consecutively
17:36emiliobholley: in that we allocate all the rule nodes while inserting in the rule tree, so they could get very close pointers
17:36emiliobholley: well, it depends on the page ofc
17:36bholleyalso, mozjemalloc bucketing might be hurting us
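(Editor's aside: bholley's question can be probed empirically by hashing a batch of simulated, closely spaced allocation addresses with FNV-1a and inspecting the bucket spread. A from-scratch sketch, assuming an invented base address and a 64-byte allocation stride; this is not Servo's hashing code:)

```rust
// FNV-1a, 64-bit variant (offset basis and prime from the published spec).
fn fnv1a_64(bytes: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in bytes {
        h ^= b as u64;
        h = h.wrapping_mul(0x0000_0100_0000_01b3);
    }
    h
}

fn main() {
    // Sanity check against a published FNV-1a test vector.
    assert_eq!(fnv1a_64(b"a"), 0xaf63_dc4c_8601_ec8c);

    // Simulate 1024 rule-node pointers allocated back to back
    // (arbitrary base, 64-byte stride -- both invented for the demo).
    let mut buckets = [0u32; 256];
    for i in 0u64..1024 {
        let addr: u64 = 0x7f00_dead_0000 + i * 64;
        let h = fnv1a_64(&addr.to_le_bytes());
        buckets[(h & 0xff) as usize] += 1;
    }
    let used = buckets.iter().filter(|&&c| c > 0).count();
    println!("{} of 256 low-byte buckets used", used);
}
```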
17:38bholleymbrubeck: did you see my comment about eq perf?
17:41emilioMateon1++ :_)
17:42Mateon1emilio: Oh, I'm not done ;)
17:46Mateon1Question, if the assertion falls inside webrender code, should I report it to webrender issues, or servo issues?
17:46Mateon1To be exact: assertion failed: !self.nodes.contains_key(&id) (thread RenderBackend, at /shared/dev/rust/servo/.cargo/git/checkouts/webrender-c3596abe1cf4f320/01c38a2/webrender/src/
17:48emilioMateon1: that could be either a servo issue when building the display list, or a WR issue itself, I'd report on servo first and probably mrobinson would like to take a look if it's reproducible
17:49Mateon1The stack trace indicates add_node <- Frame::flatten_item [4 times] <- Frame::create <- RenderBackend::process_document <- RenderBackend::run
17:55Mateon1That one is pretty fascinating, so I&#39;ll work on reducing that to a testcase after I report everything else
18:16* bholley finds his bug
18:17mbrubeckbholley: hadn&#39;t seen that yet. I&#39;ll fix that later today.
18:17bholley pub fn may_contain_style_for(&self, inherited: &ComputedValues, rules: &StrongRuleNode) -> bool {
18:17bholley     self.cache().seen_rule_nodes.might_contain(&ReuseFilterKey::new(inherited, rules));
18:17bholley     true
18:17bholley }
18:17bholleymbrubeck: cool thanks!
18:18* bholley facepalms over having spent time investigating hash collisions instead of noticing that |true|
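(Editor's aside: the culprit in the pasted snippet is the stray `;` plus the trailing `true`: the filter's answer is computed and then thrown away, so the function always answers "maybe present". A generic sketch of the pattern and its fix, with illustrative names rather than Servo's types:)

```rust
// Buggy shape: the semicolon turns the membership test into a discarded
// statement, and the function unconditionally returns `true`.
fn may_contain_buggy(seen: &[u64], key: u64) -> bool {
    seen.contains(&key); // result silently dropped
    true
}

// Fixed shape: return the expression itself.
fn may_contain_fixed(seen: &[u64], key: u64) -> bool {
    seen.contains(&key)
}

fn main() {
    let seen = [10, 20, 30];
    assert!(may_contain_buggy(&seen, 99));  // claims a miss is a hit
    assert!(!may_contain_fixed(&seen, 99)); // correctly reports the miss
    println!("buggy always answers true; fixed answers honestly");
}
```

A filter that always answers `true` degrades gracefully enough to hide itself, which is why the symptom looked like hash collisions rather than a logic bug.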
18:30Mateon1"Error deserializing Javascript" is a false alarm, console.warn() on
18:49bholleyemilio: yt?
19:03bholleyManishearth: yt?
19:08bholleyManishearth: (I was going to check to make sure that hashglobe still lets us use Fnv and precomputed hashes where appropriate, but looks like it does)
20:03Mateon1Wow, debug mode is insanely slow. What took 41 CPU-seconds in release mode is still not done after 70 CPU-minutes without optimization
20:18pcwaltonstandups: Fixing UI papercuts in the Pathfinder 2 demo.
20:18standupsOk, submitted #50722 for
21:16gwlarsberg: edunham: what's involved in changing the WR repo to be gated on appveyor? Is it possible for me to manage that (since we've had reliability issues with it regularly)?
21:18larsberggw: edunham: just edit to add webrender to it and then one of us needs to deploy it out to the servo master
21:18gwlarsberg: ok, thanks!
21:18larsbergUnfortunately, deploying the update to the servo master (where homu runs) is not a lighthearted operation, so toggling back and forth quickly is not trivial
21:19larsberge.g., it's easy to pick up other saltfs changes we deliberately haven't deployed because they're risky or require multiple people on-call or a gap between servo queue builds, and it'll block homu deployment
21:19larsbergthough worst-case you can always ssh into the homu master, tweak the cfg file there and 'service homu restart' but that's not recommended :-)
21:19gwlarsberg: ok. will probably try to enable it again later this week :)
21:20larsberggw: cool! Just wanted to make sure you knew all the corner cases. tbh, appveyor has been pretty reliable lately, though travis macos sets a low bar
21:23gwlarsberg: wow, the mac travis issues over the last two weeks o.o
21:25Mateon1I'll stop Servo at 150 mins of CPU time for that bug reproduction, that is ridiculous
21:42larsberggw: this has not been a great year, in general, for mac on travis. The last two weeks were especially bad, but it's been ~monthly that we've had near complete tree closures. It affects servo/servo less than others, fortunately.
22:26bholleyemilio: you around?
22:42Manishearthbholley: yep
11 Sep 2017
No messages