Code for preprocessing Rabatel (2023) data #71
Conversation
@albangossard @JordiBolibar let me know what you think. I can make some changes on this PR if you see it suitable.
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main     #71     +/-  ##
==========================================
- Coverage   37.57%   35.11%   -2.47%
==========================================
  Files          15       17       +2
  Lines         628      672      +44
==========================================
  Hits          236      236
- Misses        392      436      +44
```

☔ View full report in Codecov by Sentry.
Hey @facusapienza21! Thanks for these changes. I started to play with ODINN today with the aim of using these velocities, so your contribution comes at just the right time!
Since I'll be off on Friday, here is a quick review; I won't have time to play with your changes before the weekend.
```julia
Return maximum value for non-empty arrays.
This is just required to compute the error in the absolute velocity.
"""
function max_or_empyt(A::Array)
```
Do you mean `max_or_empty`?
```diff
- function max_or_empyt(A::Array)
+ function max_or_empty(A::Array)
```
Fixed (here and in the other lines of the script too)
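For reference, a minimal sketch of how the renamed helper might look (the `NaN` fallback for empty arrays is an assumption on my part; the actual implementation in the script may differ):

```julia
# Hypothetical sketch: return the maximum of A, or NaN when A is empty,
# so the absolute-velocity error computation does not throw on empty masks.
function max_or_empty(A::Array)
    return isempty(A) ? NaN : maximum(A)
end
```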
Thank you @albangossard for the feedback. No rush, we can merge this PR early next week. Thank you for the suggestions, I will work on them!
```julia
date1_offset_since = ncgetatt(file, "date1", "units")[12:21]  # e.g., "2015-07-30"
date2_offset_since = ncgetatt(file, "date2", "units")[12:21]  # e.g., "2015-08-29"
# Conversion to Julia datetime
date_mean_offset = datetime2julian(DateTime(date_mean_since)) - 2400000.5
```
Why do we still need to convert to Julian time? This was used before when we had to interact with Python dates, but this should no longer be the case, right?
I am not sure what you are referring to @JordiBolibar, but this transformation is required since that is the original format of the dataset. Days are counted from customized starting dates, so this is necessary to set everything in the same time reference.
Anyways... this is just for the interpolated dataset, I would not pay much attention to this for now.
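To illustrate the conversion being discussed: `datetime2julian` (from the `Dates` standard library) returns the Julian date, and subtracting 2400000.5 yields the Modified Julian Date, which puts all the per-file "days since" offsets on one common time axis. A small self-contained example, using the date from the comment above:

```julia
using Dates

# Convert a calendar date to Modified Julian Date (MJD):
# Julian date at midnight UTC minus the standard 2400000.5 offset.
mjd = datetime2julian(DateTime("2015-07-30")) - 2400000.5
# → 57233.0
```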
```julia
vx_error = ncread(file, "error_vx")
vy_error = ncread(file, "error_vy")
# Absolute error uncertainty using propagation of uncertainties
vx_ratio_max = map(i -> max_or_empty(abs.(vx[:,:,i][vabs[:,:,i] .> 0.0]) ./ vabs[:,:,i][vabs[:,:,i] .> 0.0]), 1:size(vx)[3])
```
Is this computationally expensive? If so, we could try to implement this in a `pmap`.
We could, but I would suggest we integrate this PR as it is and do this in the future, maybe. Also, `vabs` is not really required for some calculations, so we may even add it as an optional value we compute inside the data object.
I added an optional variable to determine whether or not to compute this variable.
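To make the `pmap` idea concrete, here is a hedged sketch with toy arrays (the names `vx`/`vabs` and the helper `ratio_max` are illustrative, not the script's actual code; with no worker processes added, `pmap` simply runs serially, so the change is a one-word swap once `addprocs` is called):

```julia
using Distributed  # provides pmap; call addprocs(n) first to actually parallelize

# Toy stand-ins for the velocity fields in the script
vx   = rand(4, 4, 3)
vabs = rand(4, 4, 3)

# Per-slice maximum velocity ratio, parallelizable across the time dimension
function ratio_max(vx, vabs, i)
    mask = vabs[:, :, i] .> 0.0
    vals = abs.(vx[:, :, i][mask]) ./ vabs[:, :, i][mask]
    return isempty(vals) ? NaN : maximum(vals)
end

vx_ratio_max = pmap(i -> ratio_max(vx, vabs, i), 1:size(vx, 3))
```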
```julia
Important remarks:
- Projections in longitude and latitude assume we are working in the north hemisphere.
  If working with south hemisphere glaciers, this needs to be changed.
```
Maybe we could add an assert on the value of `lat` to make sure that the user doesn't use data from the southern hemisphere, don't you think?
Mmm, my understanding is that the projection is defined with the zone and the hemisphere information. A priori you don't have the latitude; you need to compute it with the projection, knowing the hemisphere. Maybe I am wrong, but I don't think there is big danger in leaving it as it is.
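For what it's worth, a guard along the lines Alban suggests could check the hemisphere flag rather than the latitude, since that is what the projection is defined from (the function name and the `:north`/`:south` symbols are assumptions for illustration, not the actual API):

```julia
# Hypothetical sketch: fail fast if the projection metadata points to the
# southern hemisphere, which the lon/lat projection does not yet support.
function check_hemisphere(hemisphere::Symbol)
    hemisphere == :north || throw(ArgumentError(
        "Southern-hemisphere projections are not supported yet."))
    return true
end
```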
@albangossard @JordiBolibar feel free to merge this PR