feat: changed start and end time logic for consumer lag details #6605
Conversation
Build Error! No Linked Issue found. Please link an issue or mention it in the body using #<issue_id> |
👍 Looks good to me! Reviewed everything up to 72252cd in 13 seconds
More details
- Looked at 62 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 3 drafted comments based on config settings
1. `frontend/src/pages/MessagingQueues/MessagingQueuesUtils.ts:317`
   - Draft comment: Ensure that the change from nanoseconds to milliseconds is consistent across all related functions and API calls to prevent any potential time-related bugs.
   - Reason this comment was not posted: Confidence changes required: 50%. The change from nanoseconds to milliseconds in the `getMetaDataAndAPIPerView` function is consistent with the change in the `getStartAndEndTimesInMilliseconds` function. This ensures that the time values are correctly handled in milliseconds throughout the code.
2. `frontend/src/pages/MessagingQueues/MessagingQueuesUtils.ts:184`
   - Draft comment: The conversion of the timestamp to milliseconds by multiplying by 1000 is correct for consistency with other parts of the code that expect milliseconds.
   - Reason this comment was not posted: Confidence changes required: 50%. The change in the `getStartAndEndTimesInMilliseconds` function to multiply the timestamp by 1000 is correct for converting seconds to milliseconds. This aligns with the change in the `getMetaDataAndAPIPerView` function, where the start and end times are now multiplied by 1e6 instead of 1e9.
3. `frontend/src/pages/MessagingQueues/MessagingQueuesUtils.ts:314`
   - Draft comment: The function `getAttributeDataFromOnboardingStatus` should not be part of the ClickHouseReader interface, as it is not related to ClickHouse. Access it through the DAO in the telemetry instance instead.
   - Reason this comment was not posted: Comment was not on a valid diff hunk.
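The unit handling the drafted comments discuss can be sketched as two small helpers. This is an illustrative sketch, not the actual SigNoz code: the function names and structure are assumptions based on the comments above (seconds are multiplied by 1000 for milliseconds, and start/end are now multiplied by 1e6 for microseconds instead of 1e9 for nanoseconds).

```typescript
// Illustrative conversions matching the drafted comments; names are
// hypothetical, not the real implementation.

// getStartAndEndTimesInMilliseconds-style conversion: epoch seconds -> ms.
const secondsToMilliseconds = (seconds: number): number => seconds * 1000;

// getMetaDataAndAPIPerView-style conversion for the table API payload:
// epoch seconds -> microseconds (x 1e6); the old code used x 1e9 (ns).
const secondsToMicroseconds = (seconds: number): number => seconds * 1e6;

const point = 1733745000; // example epoch seconds
console.log(secondsToMilliseconds(point)); // 1733745000000
console.log(secondsToMicroseconds(point)); // 1733745000000000
```

Keeping both conversions next to each other makes it easy to audit that every API payload gets the unit it expects.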
Workflow ID: wflow_NYbuGPSpm9318sWf
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
When clicking on any point in the graph, the graph essentially shows results for the current aggregation interval.
Yes, this is a valid point. I was trying to think in that direction, but I don't have much context on aggregation intervals.
This is what I was thinking: https://github.com/SigNoz/engineering-pod/issues/2010#issuecomment-2527254975
Yes, so again, choosing based on points wouldn't be the correct thing to do. The aggregation interval is calculated on the query-service end, and even the frontend can do the same calculation if required. So any dot corresponds to previous data, not future data. So the logic should be:
But these metrics' aggregation windows don't correlate to spans; if the aggregation window is very small we may not see any spans, and in those cases hardcoded values make more sense.
@shivanshuraj1333 - should I change the logic or not here, according to the above discussion?
Are these metrics derived from the spans or exported directly?
Metrics are collected separately from Kafka brokers directly (code ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kafkametricsreceiver/consumer_scraper.go#L152); spans are from SDKs/agents on the client side. There's no native correlation between them (wrt timestamps).
IMO, since there is no exact correlation between them, is it correct to correlate them on the basis of timestamp, as not all the spans would be contributors to those peak metrics, right?
At the atomic level (here, partition), we can do timestamp-based correlation per partition to approximate the spans contributing to the metric for that time period.
e.g.
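The per-partition, timestamp-based correlation described above can be sketched as a filter: given a metric aggregation window for one partition, approximate the contributing spans by keeping only spans on that partition whose timestamps fall inside the window. All names here are illustrative, not SigNoz code.

```typescript
// Minimal span shape for the sketch; real spans carry far more fields.
interface SpanLite {
  partition: number;
  timestampMs: number; // epoch milliseconds
}

// Approximate the spans that contributed to a metric data point by
// matching partition and timestamp against the aggregation window.
function spansForMetricWindow(
  spans: SpanLite[],
  partition: number,
  windowStartMs: number,
  windowEndMs: number,
): SpanLite[] {
  return spans.filter(
    (s) =>
      s.partition === partition &&
      s.timestampMs >= windowStartMs &&
      s.timestampMs < windowEndMs,
  );
}
```

This is only an approximation, as the discussion notes: spans and broker metrics share no native correlation, so timestamp overlap per partition is the best available heuristic.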
@SagarRajput-7, let's get this one merged.
Sure, @shivanshuraj1333 @vikrantgupta25, please review.
Summary
Updated the start-time and end-time logic for the APIs under consumer lag details.
The reason for this is that we also get related producer spans prior to the point in time, so here we are creating a better timeframe.
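The windowing idea in the summary can be sketched as follows: build the query range around the clicked point in time rather than starting at it, so producer spans emitted shortly before the point are included. The function name matches the PR, but the half-window size here is an assumed placeholder, not the actual value used.

```typescript
interface TimeRange {
  start: number; // epoch milliseconds
  end: number;   // epoch milliseconds
}

// Sketch: center the start/end times around the clicked point so that
// related producer spans from just before the point are captured too.
function getStartAndEndTimesInMilliseconds(
  timestampSeconds: number,
  halfWindowMs: number = 5 * 60 * 1000, // assumed 5-minute half-window
): TimeRange {
  const pointMs = timestampSeconds * 1000; // seconds -> milliseconds
  return { start: pointMs - halfWindowMs, end: pointMs + halfWindowMs };
}
```

A symmetric window is one design choice; an asymmetric one (more time before the point than after) would also fit the stated motivation.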
Related Issues / PR's
Screenshots
Screen.Recording.2024-12-09.at.5.57.49.PM.mov
Affected Areas and Manually Tested Areas
Tested all the related APIs' effects on the demo.in env
Important
Update start/end time logic and default tab for consumer lag details in messaging queues.
- Updated `getStartAndEndTimesInMilliseconds()` in `MessagingQueuesUtils.ts` to calculate start and end times around a point in time, converting the timestamp to milliseconds.
- Changed the default tab in `MessagingQueuesDetails` in `MQDetails.tsx` from `ConsumerDetails` to `ProducerDetails`.
- Updated `getMetaDataAndAPIPerView()` in `MessagingQueuesUtils.ts` to use microseconds for `start` and `end` in `tableApiPayload` for the `consumerLag` view.
- Updated `MessagingQueueServiceDetailType` in `MQServiceDetailTypePerView` in `MQDetails.tsx` for the `consumerLag` and `partitionLatency` views.

This description was created by for 72252cd. It will automatically update as commits are pushed.