feat(uptime): storage configuration and consumer #6697
Conversation
This PR has a migration; here is the generated SQL:
-- start migrations
-- forward migration events_analytics_platform : 0020_uptime_monitors_init
Local op: CREATE TABLE IF NOT EXISTS uptime_monitor_checks_local (organization_id UInt64, project_id UInt64, environment LowCardinality(Nullable(String)), uptime_subscription_id UUID, uptime_check_id UUID, scheduled_check_time DateTime64(3), timestamp DateTime64(3), duration_ms UInt64, region_slug LowCardinality(String), check_status LowCardinality(String), check_status_reason LowCardinality(String), http_status_code Nullable(UInt16), trace_id UUID, retention_days UInt16) ENGINE ReplicatedReplacingMergeTree('/clickhouse/tables/events_analytics_platform/{shard}/default/uptime_monitor_checks_local', '{replica}') ORDER BY (organization_id, project_id, toDateTime(timestamp), trace_id, uptime_check_id) PARTITION BY (retention_days, toMonday(timestamp)) TTL toDateTime(timestamp) + toIntervalDay(retention_days) SETTINGS index_granularity=8192;
Distributed op: CREATE TABLE IF NOT EXISTS uptime_monitor_checks_dist (organization_id UInt64, project_id UInt64, environment LowCardinality(Nullable(String)), uptime_subscription_id UUID, uptime_check_id UUID, scheduled_check_time DateTime64(3), timestamp DateTime64(3), duration_ms UInt64, region_slug LowCardinality(String), check_status LowCardinality(String), check_status_reason LowCardinality(String), http_status_code Nullable(UInt16), trace_id UUID, retention_days UInt16) ENGINE Distributed(`cluster_one_sh`, default, uptime_monitor_checks_local, cityHash64(reinterpretAsUInt128(trace_id)));
-- end forward migration events_analytics_platform : 0020_uptime_monitors_init
-- backward migration events_analytics_platform : 0020_uptime_monitors_init
Distributed op: DROP TABLE IF EXISTS uptime_monitor_checks_dist;
Local op: DROP TABLE IF EXISTS uptime_monitor_checks_local;
-- end backward migration events_analytics_platform : 0020_uptime_monitors_init
❌ 1 test failed. More details are available in the Test Analytics Dashboard.
depends on getsentry/sentry-kafka-schemas#359 being merged and us bumping the kafka schema
needs #6717 but otherwise should be good to go
#[serde(rename_all = "snake_case")]
pub struct CheckStatusReason {
    /// The type of the status reason
    pub r#type: String,
I'm sure it works but damn that's ugly. What happened to naming it `ty` and putting an `#[serde(rename = "type")]` ...
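For reference, a minimal sketch of the suggested spelling (assuming serde with the derive feature; only the field shown in the snippet above is included):

```rust
use serde::{Deserialize, Serialize};

// Sketch only: keep the Rust field named `ty` and rename it to `type` on the
// wire, instead of using the raw identifier `r#type`.
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub struct CheckStatusReason {
    /// The type of the status reason
    #[serde(rename = "type")]
    pub ty: String,
}
```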
Also, it looks like it's a very static reason; we could benefit from having it be a `&'a str` instead of a `String`.
it seems like i'd have to introduce a bunch of lifetime stuff in order for this to work, is this really necessary?
Not that big of a deal, saves on memory. We do it a lot in Relay too.
https://github.com/getsentry/relay/blob/master/relay-server/src/services/store.rs#L1255
went ahead and converted all the `String`s to `&str`s! let me know if what i did makes sense, not super familiar with lifetimes in rust.
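For context, here is a minimal sketch of what the borrowed variant can look like with serde (a sketch under the assumption of a JSON payload and the serde/serde_json crates, not the PR's exact code):

```rust
use serde::Deserialize;

// The struct borrows `&str` slices from the raw message bytes instead of
// allocating owned `String`s, at the cost of a lifetime parameter.
#[derive(Debug, Deserialize)]
#[serde(rename_all = "snake_case")]
pub struct CheckStatusReason<'a> {
    /// The type of the status reason
    #[serde(rename = "type")]
    pub ty: &'a str,
}

// serde borrows `&str` fields directly from the input, so the parsed value
// cannot outlive `payload`; deserialization also fails if the JSON string
// contains escape sequences that would require an allocation.
fn parse(payload: &[u8]) -> Result<CheckStatusReason<'_>, serde_json::Error> {
    serde_json::from_slice(payload)
}
```

The trade-off is that every type holding the parsed value now carries the lifetime parameter, which is the "lifetime stuff" mentioned above.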
[
  { name: organization_id, type: UInt, args: { size: 64 } },
  { name: project_id, type: UInt, args: { size: 64 } },
  { name: environment, type: String, args: { schema_modifiers: [nullable, low_cardinality] } },
this should not be low cardinality; it's set by a customer, so you can't guarantee it
environment is set to low cardinality on some other datasets, was just copying based on that. i can remove it if you'd like though
{ name: environment, type: String, args: { schema_modifiers: [ low_cardinality, nullable ] } },
Column("environment", String(Modifiers(nullable=True, low_cardinality=True))),
snuba/clusters/storage_sets.py
Outdated
@@ -20,6 +20,7 @@
    "EVENTS_ANALYTICS_PLATFORM": "events_analytics_platform",
    "GROUP_ATTRIBUTES": "group_attributes",
    "PROFILE_CHUNKS": "profile_chunks",
    "UPTIME_RESULTS": "uptime_monitor_checks",
You don't need this. The migration belongs to the `events_analytics_platform` storage set already. That also means removing the migration-related changes in this PR (in `group_loader.py`, `groups.py`, and the directory in `snuba_migrations`).
I believe it hasn't been run because it's prefixed with `0020_` when the previous one is `0018_` (and yes, that's my fault since I reverted `0019_`), but as soon as you fix that, the migration will run everywhere `events_analytics_platform` is available.
i'll update the migration in a separate PR! thanks for the callout.
i've also removed the migration-related changes as specified. thanks.
name: uptime_monitor_checks
storage:
  key: uptime_monitor_checks
  set_key: uptime_monitor_checks
I think it should belong to `events_analytics_platform` since the migrations are also linked to the `events_analytics_platform` storage set.
I suppose it's not a MUST, but since the migrations are already in the `events_analytics_platform` set, I'd keep things together.
i've gone ahead and removed the errant uptime_monitor_checks config stuff.
renames the migration, as 0019 was reverted in the EAP dataset. #6697 (comment)