Declarative marks #17

Closed
igrigorik opened this issue Jun 27, 2016 · 24 comments

@igrigorik
Member

Today, to emit a user timing mark or measure you need to execute JS; however, JS may be blocked due to pending stylesheets, other blocking scripts, etc. It would be nice if there were a declarative mechanism to register a mark and/or measure via markup. E.g.:

  • Server streams back the response in chunks:

 <body>
  <p>lorem ipsum....</p>
  ...
  <user-timing mark="header"> <!-- or some such.. -->
  ...

  • UA receives a chunk and runs a preload scanner on it: it should emit a mark when it detects the declaration.
  • Some time later, the document parser (re)processes the same chunk, executes JS, and emits another set of marks defined via existing JS APIs.

We've had similar discussions before around exposing the timing of each HTTP chunk, but that could be excessive and it's hard to reason about (there is no easy way to correlate a particular chunk to something you care about). On the other hand, exposing a declarative mechanism would allow applications to "mark" the significant points in the response stream and correlate them against other metrics.
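For illustration, a minimal sketch of how such declarative marks might be harvested, assuming they surface as ordinary "mark" entries on the performance timeline. The <user-timing> element and the "header" name above are the hypothetical parts; PerformanceObserver itself is today's API.

// Today the only option is an inline <script>performance.mark('header')</script>
// in the streamed HTML, which runs only once the parser reaches it. A declarative
// mark could be consumed the same way existing marks are:
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, entry.startTime); // e.g. "header", emitted when the preload scanner saw it
  }
});
observer.observe({ entryTypes: ['mark'] });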

@eliperelman

eliperelman commented Jun 27, 2016

Love the idea. Just for clarification, though: it seems your intent is to capture the timestamp when the UA parses the declaration, or something similar. It makes me wonder if this would ease other use cases by specifying that the timestamp should instead be captured once it's in the DOM or painted.

<div id="navbar">
  <!-- maybe default is parsed? etc. -->
  <user-timing mark="navigationLoaded" capture="injected" />
</div>

Thoughts?

@eliperelman

eliperelman commented Jun 27, 2016

And then you could also use an element to capture the mark automatically when content is injected into the DOM:

// maybe the default is "injected" for JS API element, "HTMLUserTimingElement"
let el = document.createElement('user-timing');

el.mark = 'fullyLoaded';
document.body.appendChild(el);

@yoavweiss
Contributor

yoavweiss commented Jun 28, 2016

I agree with @eliperelman that tokenization times may not be of high interest, but added-to-DOM times, as well as paint times of nodes impacting the UI, would be.

One more thing we discussed during the F2F is how such declarative marks could be used to make meaningful-first-paint a more meaningful metric.

So, maybe a timing attribute (with values such as add, paint, etc.) on existing elements would make more sense? (All nodes marked for timing would then be considered part of the meaningful paint.)
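A rough sketch of that attribute idea, purely for illustration: the timing attribute and its values are hypothetical and unspecified, only the DOM calls are real.

// Hypothetical "timing" attribute on an existing element.
const hero = document.createElement('img');
hero.src = '/hero.jpg';
hero.setAttribute('timing', 'add paint'); // request added-to-DOM and paint timestamps
document.body.appendChild(hero);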

@plehegar
Member

I prefer the idea of an attribute rather than an element. In addition, if you attach the attribute to a section, you could differentiate between the start of the parsing and its end (if there is a use case for it).

@igrigorik igrigorik modified the milestones: Level 2, Level 3 Jun 29, 2016
@igrigorik
Member Author

I agree with @eliperelman that tokenization times may not be of high interest...

Depends on who you ask. The origin of this discussion is to answer exactly that -- e.g. to understand when the mark was scanned by preload scanner, because the parser is blocked.

@eliperelman

Yes, which is why I think leaving that determination to the creator of the mark is the most reasonable. Scanned/parsed could be the default, with a way to override for paint, add, etc.

@yoavweiss
Contributor

yoavweiss commented Sep 12, 2016

Depends on who you ask. The origin of this discussion is to answer exactly that -- e.g. to understand when the mark was scanned by preload scanner, because the parser is blocked.

What are the stated use cases for knowing tokenization times? Note that implementation changes (e.g. moving tokenization from a background thread to the UI thread) may regress such times while having a positive impact on real-life performance.

I clearly see a use case for nodes added to the DOM and/or painted to the screen, as that has a direct impact on users. I also see a use case for measuring when requests were issued, but that's already covered by ResourceTiming. I'm not sure what other user-impacting metrics tokenization time would give developers.

@igrigorik
Member Author

You're streaming the HTML response and you want to know when some chunk of bytes is first processed by the browser -- e.g. if it's scanned by the preload scanner, then it's that moment. If memory serves, both Ben (FB) and @bluesmoon have been asking for this for a while.

Yes, which is why I think leaving that determination to the creator of the mark is the most reasonable. Scanned/parsed could be the default, with a way to override for paint, add, etc.

That's an interesting idea. /cc @panicker as this kinda overlaps with the ~hero exploration...

<div time-it="parse, tokenize, render" mark-name="thing">...</div>

(vigorous hand waving here...) The parse mark is emitted when the div declaration is scanned; tokenize is emitted when the div element is appended to the DOM; render when it's visible. Or some such? =/

@bluesmoon

Also, how does this relate to when a MutationObserver would fire if it were monitoring the DOM that contains the node?

@eliperelman

eliperelman commented Sep 12, 2016

@igrigorik your implicit mark creation for elements seems intriguing, and I'm wondering if that is the root feature we are trying to achieve. Basically: what is the timing information for a particular DOM node? Having access to the timing information for a node seems to make having a mark name for it obsolete, as once you can identify a node, the name is really just a side effect of needing to identify something in userland.

<div id="timed" user-timeable>...</div>

const div = document.getElementById('timed');

// each a DOMHighResTimeStamp
div.timing.parsed
div.timing.tokenized
div.timing.rendered

So as long as you can already identify an element you are tracking as user-timeable, semantics around mark names seem kinda pointless. Thoughts?
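A hypothetical harvesting pass over the sketch above: querySelectorAll is real, but the user-timeable attribute and the .timing object are invented for illustration.

// Walk every element that opted in and read its (hypothetical) timing record.
for (const el of document.querySelectorAll('[user-timeable]')) {
  const { parsed, tokenized, rendered } = el.timing;
  console.log(el.id, { parsed, tokenized, rendered });
}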

@yoavweiss
Contributor

IMO, exposing tokenization times increases the risk of developers measuring implementation details, and of preventing implementations from improving in cases where those "implementation detail" metrics would regress even if the user experience improved.

So, given that risk, I'd love to hear from @bluesmoon and Ben why they want such a metric, and why they think it's valuable.

@csharrison

When I last chatted with Ben, I got the impression that the current situation is so bad that literally any additional information we can give about the state of the browser makes perf analytics easier for FB.

For FB, there were a few special cases in which Chrome was scanning preloads, but their fetches were being delayed (by up to 150ms in some cases) because the main thread was pegged by extension content scripts on desktop platforms.

This was a terrible user experience that confused them, and they had to run trace analysis and reach out to us to continue investigating and improve things on Chrome's side. This is a case where scan time would have helped find the problem earlier (they'd get it server-side rather than having to reproduce and analyze locally).

I agree exposing this in general seems dangerous, though. Parse time and render time have many more use cases, imo.

@bluesmoon

bluesmoon commented Sep 13, 2016 via email

@spanicker

I think we should focus on providing developers "user-oriented" metrics, and render time of critical elements definitely falls here.
I see parse/tokenize time as a diagnostic metric to help them answer: "this element was slow to render; was it network or CPU bound?" So perhaps it makes sense to expose these in addition as "diagnostics", although I would tend to avoid this in V1.
Render time should be the top-level thing they measure; totally agree with @yoavweiss here.

@bmaurer

bmaurer commented Sep 15, 2016

I think we should focus on providing developers "user-oriented" metrics, and render time of critical elements definitely falls here.

I'd rephrase this and say that we should focus on APIs that help developers measure things that are currently difficult or impossible to measure, i.e., we should prioritize APIs by the value they provide over what currently exists. Understanding user-oriented metrics is just as important as understanding the things that contribute to those metrics. If your perf tool tells you you're slow but can't help you figure out why, it's not providing value.

In this case there's currently no API that provides us anything close to "when did this element come in from the network". We have to do really terrible and unreliable hacks to even get a guess at what this timing is. As Charles mentions, this has made some debugging problems much harder than they should be. We can provide a lot of value to developers by filling this gap.

IMO, exposing tokenization times increases the risk of developers measuring implementation details,

I agree that the concept of "tokenization time" is low level and may be implementation-specific. What we really care about is "when did these bytes of data get to the browser". Our server sends multiple HTTP-level chunks to the browser as they become ready, and we want to figure out when they arrive. Ideally I'd want this to be more of a network-layer measurement, but HTTP doesn't have a great way of naming/annotating chunks.

That said, knowing implementation-specific times (like tokenization and parse) can be very helpful. Even if they differ across platforms, they can fill in a lot of context when you're trying to understand what happened based on client-side data. Developers sophisticated enough to use these also have the time to learn the differences in implementation detail.

@igrigorik
Member Author

So as long as you can already identify an element you are tracking as user-timeable, semantics around mark names seem kinda pointless. Thoughts?

I could buy that. However, I think this also conflicts with what @bluesmoon is describing. Ideally an analytics script should not have to set up a document-wide MutationObserver and interrogate each and every node; there is a lot of value in surfacing marks via PerfObserver (async notifications, out of the critical path, etc.).
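For contrast, a minimal sketch of the document-wide MutationObserver interrogation a drop-in library needs today. The user-timeable attribute is carried over from the hypothetical example above; everything else is today's API.

// Watch the whole document and inspect every added node on the main thread.
// The timestamp below is only "when the callback ran", not when the node was
// parsed or painted, which is part of why timeline entries are preferable.
const mo = new MutationObserver((mutations) => {
  for (const m of mutations) {
    for (const node of m.addedNodes) {
      if (node.nodeType === Node.ELEMENT_NODE && node.hasAttribute('user-timeable')) {
        console.log('saw', node.id, 'at', performance.now());
      }
    }
  }
});
mo.observe(document.documentElement, { childList: true, subtree: true });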

@bmaurer

bmaurer commented Sep 15, 2016

The API @eliperelman suggests wouldn't actually be that bad if we had a version of MutationObserver that lets you watch for an element with a specific ID (or that matches a CSS selector). Imagine an updating version of querySelector. That's actually not that crazy of an API and could be useful for more than just perf.

That said, measuring the tokenization / parsing / rendering time of every DOM node might be a bit pricey.
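A rough approximation of that "updating querySelector" idea, built on today's MutationObserver; its precision is limited to when the observer callback runs, which is exactly the gap discussed above.

// Invoke a callback the first time an element matching the selector appears.
function watchSelector(selector, callback) {
  const existing = document.querySelector(selector);
  if (existing) return callback(existing);
  const mo = new MutationObserver(() => {
    const el = document.querySelector(selector);
    if (el) {
      mo.disconnect();
      callback(el);
    }
  });
  mo.observe(document.documentElement, { childList: true, subtree: true });
}

// Usage: record (approximately) when the navbar shows up in the DOM.
watchSelector('#navbar', () => performance.mark('navbar-in-dom'));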

@igrigorik
Member Author

The API @eliperelman suggests wouldn't actually be that bad if we had a version of MutationObserver that lets you watch for an element with a specific ID (or that matches a CSS selector).

That assumes you know what to look for. If you control both the application code and the monitoring code, that works; but if you're building a generic / drop-in library to harvest perf metrics, that doesn't work so well. This is precisely the problem the performance timeline is supposed to solve: one place for perf scripts to harvest metrics without the need to instrument every single node and operation.

@npm1
Contributor

npm1 commented Mar 11, 2019

Measuring rendering times of elements is within the scope of ElementTiming. If other timestamps (parsed, tokenized) are shown to be important, they could be added to that API in the future. Other than that, there does not seem to be a use case for adding a standalone header for a mark. Therefore, I think we should close this issue. Thoughts?
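For reference, a sketch of the Element Timing usage being referred to, roughly as drafted at the time; attribute and field names follow the draft and may differ from what finally ships.

// Markup: <img src="hero.jpg" elementtiming="hero-image">
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // renderTime is the paint timestamp; loadTime is the fallback for cross-origin images.
    console.log(entry.identifier, entry.renderTime || entry.loadTime);
  }
}).observe({ type: 'element', buffered: true });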

@yoavweiss
Contributor

Agree that this idea seems to have evolved into Element Timing and we can close this issue. @igrigorik?

@igrigorik
Member Author

Agree that we're tackling a subset of what we discussed here in Element Timing. However...

In this case there's currently no API that provides us anything close to "when did this element come in from the network". We have to do really terrible and unreliable hacks to even get a guess at what this timing is. As Charles mentions, this has made some debugging problems much harder than they should be. We can provide a lot of value to developers by filling this gap.

The above use case is still not addressed, unless we believe that ET would solve that too, or we no longer consider it a use case worth solving?

@bmaurer @n8schloss a few years later: is this still a pain point and something you would push for?

@yoavweiss
Contributor

Interesting. Maybe we can link from ET entries to the relevant RT entries? (or correlate them with Fetch IDs)

@tdresser
Contributor

Long term, I think we should provide links from ElementTiming entries to relevant resource timing entries.
I don't think we should block the first version of ElementTiming on this, but we should track this in an issue on that spec.
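Until such a link exists, a rough correlation sketch: join an element-timing entry for an image to the resource-timing entry with the same URL. This assumes the element entry exposes a url field (as the draft does for images) and that URLs are unique enough to join on.

new PerformanceObserver((list) => {
  for (const et of list.getEntries()) {
    // Find the resource-timing entry whose URL matches the timed image.
    const rt = performance.getEntriesByType('resource').find((r) => r.name === et.url);
    if (rt) {
      console.log(et.identifier, 'renderTime:', et.renderTime, 'responseEnd:', rt.responseEnd);
    }
  }
}).observe({ type: 'element', buffered: true });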

@npm1
Contributor

npm1 commented Mar 18, 2019

I've filed w3c/element-timing#2 so closing this issue.

@npm1 npm1 closed this as completed Mar 18, 2019