Agenda for May 31, 2024 #665

Closed
nairnandu opened this issue May 29, 2024 · 2 comments
Labels: agenda (Agenda item for the next meeting)

Comments

@nairnandu (Contributor)

Here is the proposed agenda for the meeting on May 31st, 2024.

@bkardell (Contributor)

Can we add #663 as well?

@nairnandu (Contributor, Author)

Attendees: @gsnedders, @meyerweb, @jgraham, @nairnandu, @bkardell, @nsull, @foolip, @dandclark

Brief notes from the meeting:

  • Continue review of the 2025 proposal selection process (#657 and here)
    • bkardell: agrees with the general direction, but has some specific questions about the timing of prioritization. As an example, in cases where there is a public position from one organization, do we move forward with it?
    • foolip: the idea of having a champion is a good one. However, the restriction on face-to-face time for championing proposals should be discussed further. A possible alternative is to do some of that work offline and discuss disagreements in the meeting. Three prioritization buckets sound reasonable.
    • nsull: likes the idea of having a smaller/simpler set of priority signals. The preference for async vs. in-person is a cultural thing; would prefer face-to-face for the most important discussion topics.
    • jgraham: the time for presenting support does have to be limited. In cases where there is not enough public data to confirm support, it might be worth discussing it live.
    • dandclark: ideally we want to be spending time talking about proposals that are on the margin. Logistically, it would be good to have some signaling ahead of time.
    • jgraham: if there is some overlap on proposals that organizations want to champion, then that would be the signal.
    • foolip: on championing - how do we see that playing out?
    • jgraham: everybody comes up with their list for championing. If more than one party does champion a proposal, we can sort out who would do that.
    • bkardell: has advocated splitting the proposals into groups and having organizations decide which group to champion.
    • nairnandu: what is the next step here? Should we try to have a dry run? One of the ideas proposed earlier was to have some reference proposals that we have consensus on.
    • jgraham: next step would be to write this up as a PR and ask for feedback. Not sure if a dry run would help here.
    • bkardell: most of the process should be things that we are familiar with. The question is always why we are picking certain things.
    • jgraham: one thing we can do in the interim is to start gathering a list of data points we want to collect for Interop proposals.
    • nsull: yes, did bring that up in the previous meeting. Examples: survey data, developer requests, bug stars, etc.
    • foolip: +1 on putting that on the agenda for a brainstorming session.
    • bkardell: +1. It would be great if we could share those signals and talk about them.
    • nsull: synchronous discussion would be preferable for this
    • jgraham: This would give us enough time ahead of the call for proposals
    • Next steps: 1) PR for Mozilla’s proposal and 2) brainstorming session on developer, user and compatibility signals
  • Re-scoring previous test runs causes confusion #356
    • jgraham: daily scores vs. historic scores, i.e., what did the dashboard show on a given day. The next step here would be a note to Daniel on how we can incorporate that into the dashboard. We can do the back-end work, but it would require some front-end work (see the first sketch after these notes).
    • Next step: nairnandu will follow up with Daniel
  • Document the data collection and results generation system #357
    • jgraham: We would need a volunteer
    • foolip: happy to create a bullet list of items to show the workflow
    • jgraham: it's for people to understand why something is broken
    • meyerweb: happy to do an editorial pass at this
    • Next step: foolip will author a PR
  • Consider some improvements to WPT dash #663
    • bkardell: an extra column or a view to show how many tests pass in all browsers. This should help us uncover areas where we can come together and agree on priorities.
    • foolip: likes the universally-passing metric; it is complementary to BSF (browser-specific failures).
    • jgraham: our PoV is that BSF did not work the way it was intended. The change in the number of interoperable tests over time could be something we could discuss in detail (see the second sketch after these notes).
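
Two illustrative sketches follow. The first concerns the daily-vs-historic score distinction from the #356 discussion: a minimal sketch, assuming scores are snapshotted each time they are (re)computed. The names `ScoreSnapshot` and `ScoreStore` are hypothetical and not part of the actual wpt.fyi code base.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ScoreSnapshot:
    run_date: date   # the day of the test run being scored (hypothetical field)
    scored_on: date  # the day this score was computed
    scores: dict     # browser name -> score

class ScoreStore:
    """Keeps every score ever computed, so the dashboard can show either the
    score as it appeared on a given day or the latest recomputation."""

    def __init__(self):
        self._snapshots = []

    def record(self, snapshot):
        self._snapshots.append(snapshot)

    def as_shown_on(self, run_date, shown_on):
        # "Daily score": the newest snapshot of run_date's scores that
        # existed by `shown_on`, i.e. what the dashboard displayed that day.
        candidates = [s for s in self._snapshots
                      if s.run_date == run_date and s.scored_on <= shown_on]
        return max(candidates, key=lambda s: s.scored_on, default=None)

    def latest(self, run_date):
        # "Historic score": the same run re-scored with the current logic.
        candidates = [s for s in self._snapshots if s.run_date == run_date]
        return max(candidates, key=lambda s: s.scored_on, default=None)

store = ScoreStore()
store.record(ScoreSnapshot(date(2024, 5, 1), date(2024, 5, 1), {"chrome": 90.0}))
store.record(ScoreSnapshot(date(2024, 5, 1), date(2024, 5, 20), {"chrome": 88.0}))
assert store.as_shown_on(date(2024, 5, 1), date(2024, 5, 2)).scores["chrome"] == 90.0
assert store.latest(date(2024, 5, 1)).scores["chrome"] == 88.0
```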
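
The second sketch concerns the metrics discussed for #663: a count of tests passing in every browser ("universally passing") alongside a simplified browser-specific-failure count. The toy results structure below is an assumption for illustration, not the real wpt.fyi data model.

```python
# test name -> {browser name -> True if the test passes in that browser}
Results = dict[str, dict[str, bool]]

def universally_passing(results: Results) -> int:
    """Tests that pass in every browser in the run."""
    return sum(1 for outcomes in results.values() if all(outcomes.values()))

def browser_specific_failures(results: Results, browser: str) -> int:
    """Tests that fail only in `browser` while passing everywhere else
    (a simplified take on BSF, not the exact wpt.fyi definition)."""
    count = 0
    for outcomes in results.values():
        others = [ok for b, ok in outcomes.items() if b != browser]
        if not outcomes.get(browser, True) and all(others):
            count += 1
    return count

runs = {
    "css/grid/a.html": {"chrome": True, "firefox": True, "safari": True},
    "css/grid/b.html": {"chrome": True, "firefox": False, "safari": True},
}
assert universally_passing(runs) == 1
assert browser_specific_failures(runs, "firefox") == 1
```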
