Declarative marks #17
Comments
Love the idea. Just for clarification though: it seems your intent is to capture the timestamp when the UA parses the declaration, or something similar. It makes me wonder if this would ease other use cases by specifying that the timestamp should instead be captured once it's in the DOM/painted.

```html
<div id="navbar">
  <!-- maybe default is parsed? etc. -->
  <user-timing mark="navigationLoaded" capture="injected" />
</div>
```

Thoughts?
And then you could also use an element to capture the mark automatically when content is injected into the DOM:

```js
// maybe the default is "injected" for the JS API element, HTMLUserTimingElement
let el = document.createElement('user-timing');
el.mark = 'fullyLoaded';
document.body.appendChild(el);
```
I agree with @eliperelman that tokenization times may not be of high interest, but added-to-DOM times, as well as paint times of nodes impacting the UI, would be. One more thing we discussed during the F2F is how such declarative marks could be used to get a more meaningful meaningful-first-paint. So, maybe a...
I prefer the idea of an attribute rather than an element. In addition, if you attach the attribute to a section, you could differentiate between the start of the parsing and its end (if there is a use case for it).
Depends on who you ask. The origin of this discussion is to answer exactly that -- e.g. to understand when the mark was scanned by the preload scanner, because the parser is blocked.
Yes, which is why I think leaving that determination to the creator of the mark is the most reasonable. Scanned/parsed could be the default, with a way to override for paint, add, etc.
What are the use cases stated for knowing the tokenization times? Note that implementation changes (e.g. moving tokenization from a background thread to the UI thread) may regress such times while having a positive impact on real-life performance. I clearly see a use case for nodes added to the DOM and/or painted to the screen, as that has a direct impact on users. I also see a use case for measuring when requests were issued, but that's already covered by ResourceTiming. I'm not sure what other user-impacting metrics tokenization time would give developers.
You're streaming the HTML response and you want to know when some chunk of bytes is first processed by the browser -- e.g. if it's scanned by the preload scanner, then it's that moment. If memory serves, both Ben (FB) and @bluesmoon have been asking for this for a while.
That's an interesting idea. /cc @panicker as this kinda overlaps with the ~hero exploration...
(vigorous hand waving here...) A parse mark is emitted when the div declaration is scanned; tokenize is emitted when the div element is appended to the DOM; render when it's visible. Or some such? =/
Also, how does this relate to when a MutationObserver would fire if it were monitoring the DOM that contains the node?
@igrigorik your implicit mark creation for elements seems intriguing, and I wonder if that is the root feature we are trying to achieve. Basically, what is the timing information for a particular DOM node? Having access to the timing information for a node seems to make having a mark name for it obsolete, since once you can identify a node, the name is really just a side effect of needing to identify something in userland.

```html
<div id="timed" user-timeable>...</div>
```

```js
const div = document.getElementById('timed');

// each a DOMHighResTimeStamp:
div.timing.parsed
div.timing.tokenized
div.timing.rendered
```

So as long as you can already identify an element you are tracking as user-timeable, semantics around mark names seem kinda pointless. Thoughts?
IMO, exposing tokenization times increases the risk of developers measuring implementation details, and of preventing implementations from improving in cases where those "implementation detail" metrics would regress even though the user experience would improve. So, given that risk, I'd love to hear from @bluesmoon and Ben why they want such a metric, and why they think it's valuable.
When I last chatted with Ben, I got the impression that the current situation today is so bad that literally any additional information we can give which helps relay the state of the browser makes perf analytics easier for FB.

For FB, there were a few special cases in which Chrome was scanning preloads, but their fetches were being delayed (up to 150ms in some cases) due to the main thread being pegged by extension content scripts on desktop platforms. This was a terrible user experience that confused them, and they had to run trace analysis and reach out to us to continue investigating / improve things on Chrome's side. This is a case where scan time would have helped find the problem earlier (they'd get it server side rather than having to reproduce and analyze locally).

I agree exposing this in general seems dangerous though. Parse time and render time have many more use cases imo.
For us, this is a means to measure when interesting things have happened as a result of a SPA. At the moment we use MutationObserver to check for DOM changes, but there is some overhead involved. If user timing marks were inserted and those showed up in PerformanceObserver, it would make identifying and measuring these interesting changes easier.

It's never used as a metric on its own, but as part of a larger set of metrics (i.e., think responseStart as part of the performance timeline)... I guess what we really want is NavigationTiming for SPAs.

Philip
I think we should focus on providing developers "user-oriented" metrics, and render time of critical elements definitely falls here.
I'd rephrase this and say that we should focus on APIs that help developers measure things that are currently difficult or impossible to measure, i.e., we should prioritize APIs by the value they provide over what currently exists. Understanding user-oriented metrics is equally as important as understanding the things that contribute to those metrics. If your perf tool tells you that you suck but can't help you figure out why, then it's not providing value. In this case there's currently no API that provides anything close to "when did this element come in from the network". We have to do really terrible and unreliable hacks to even get a guess at this timing. As Charles mentions, this has made some debugging problems much harder than they should be. We can provide a lot of value to developers by filling this gap.
I agree that the concept of "tokenization" time is low level and may be implementation specific. What we really care about is "when did these bytes of data get to the browser". Our server sends multiple HTTP-level chunks to the browser as they become ready, and we want to figure out when they arrive. Ideally I'd actually want this to be more of a network-layer measurement, but HTTP doesn't have a great way of naming/annotating chunks. That said, knowing implementation-specific times (like tokenization or parse) can be very helpful. Even if they are different across platforms, they can fill in a lot of context when you're trying to understand what happened based on client-side data. Developers sophisticated enough to use these also have the time to learn the differences in implementation detail.
I could buy that. However, I think this also conflicts with what @bluesmoon is describing... Ideally an analytics script should not have to set up a document-wide MutationObserver and interrogate each and every node; there is a lot of value in surfacing marks via PerfObserver (async notifications, out of the critical path, etc.).
The API @eliperelman suggests wouldn't actually be that bad if we had a version of MutationObserver that lets you watch for an element with a specific ID (or one that matches a CSS selector). Imagine an updating version of querySelector. That's actually not that crazy of an API and could be useful for more than just perf. That said, measuring the tokenization / parsing / rendering time of every DOM node might be a bit pricey.
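That "updating querySelector" can be approximated in userland today. A minimal browser-side sketch, assuming watching `childList` mutations under the root is acceptable overhead; `whenMatched` is a hypothetical helper name, not an existing API:

```javascript
// Hypothetical helper: resolves once an element matching `selector`
// exists under `root`, using MutationObserver as an "updating querySelector".
function whenMatched(selector, root = document) {
  return new Promise((resolve) => {
    const found = root.querySelector(selector);
    if (found) return resolve(found); // already in the DOM

    const mo = new MutationObserver(() => {
      const el = root.querySelector(selector);
      if (el) {
        mo.disconnect(); // stop observing once matched
        resolve(el);
      }
    });
    mo.observe(root, { childList: true, subtree: true });
  });
}
```

An analytics script could then record a mark when the element shows up, e.g. `whenMatched('#hero').then(() => performance.mark('hero-added'))` -- though, as noted above, re-running the selector on every mutation batch is exactly the kind of overhead a native mechanism would avoid.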
That assumes you know what to look for. If you control both the application code and the monitoring code, that works, but if you're building a generic / drop-in library to harvest perf metrics.. that doesn't work that well. This is precisely the problem that performance timeline is supposed to solve: one place for perf scripts to harvest metrics without the need to instrument every single node and operation. |
Measuring rendering times of elements is under the scope of ElementTiming. If other timestamps (parsed, tokenized) are shown to be important, they could be added to that API in the future. Other than that, there does not seem to be a use case for adding a standalone header for a mark. Therefore, I think we should close this issue. Thoughts?
Agree that this idea seems to have evolved into Element Timing and we can close this issue. @igrigorik? |
Agree that we're tackling a subset of what we discussed here in Element Timing. However...
The above use case is still not addressed, unless we believe that ET would solve that too, or we no longer consider this a use case worth solving. @bmaurer @n8schloss a few years later... is this still a pain point and something you would push for?
Interesting. Maybe we can link from ET entries to the relevant RT entries? (or correlate them with Fetch IDs) |
Long term, I think we should provide links from ElementTiming entries to relevant resource timing entries. |
I've filed w3c/element-timing#2 so closing this issue. |
Today, to emit a user timing mark or measure you need to execute JS; however, JS may be blocked due to pending stylesheets, other blocking scripts, etc. It would be nice if there were a declarative mechanism to register a mark and/or measure via markup. E.g...
We've had similar discussions before around exposing the timing of each HTTP chunk, but that could be excessive and hard to reason about (there is no easy way to correlate a particular chunk with something you care about). On the other hand, exposing a declarative mechanism would allow applications to "mark" the significant points in the response stream and correlate them against other metrics.