Potentially incorrect wording in the specification #44
Comments
I was told that the OpenXR API returned near and far plane as well as fov to make sure that the code that calculates the scene will use the same matrices as the system code that calculates the depth texture. Are you sure that you need to do the adjustment in that case? Are you adding the near plane distance in your shaders?
Speaking of OpenXR, can you point me to the API or extension in OpenXR that you use for this?
If the values in the buffer can be used directly by the shader for occlusion, then I'm 99% sure that they match point 1. It'd mean that they already went through some projection matrix (and yes, if you don't know the near, far, & FOV of that matrix, then there's not much you can do with the data), which means that they are going to be normalized to a fixed range. If the data were returned according to point 2, then the system's near and far planes wouldn't factor into the values at all.
I'm quite certain that if it's "distance from the camera to the environment" (option 2) and the spec says it should be "distance from view's near plane to the environment" & we decide not to change the spec, then an adjustment over the entire buffer is going to be needed, simply because those 2 things aren't the same.
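For illustration, here is a minimal sketch of the adjustment being discussed, assuming the buffer holds metric distances measured from the camera along the view's principal axis (option 2) and the spec keeps asking for distances from the near plane; the function name and the clamping behavior are assumptions, not anything from the spec:

```ts
// Hypothetical helper: convert "distance from the camera" entries into
// "distance from the view's near plane" entries by subtracting the near distance.
// Assumes both quantities are measured along the view's principal (Z) axis.
function adjustToNearPlane(raw: Float32Array, nearDistance: number): Float32Array {
  const adjusted = new Float32Array(raw.length);
  for (let i = 0; i < raw.length; i++) {
    // Clamp entries closer than the near plane to 0 instead of letting them go negative.
    adjusted[i] = Math.max(0, raw[i] - nearDistance);
  }
  return adjusted;
}
```

This is exactly the kind of whole-buffer pass the comments here are trying to avoid mandating.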
I didn't find it on the Khronos site but it is listed on ours: https://developer.oculus.com/documentation/native/android/mobile-depth/
I have confirmed that ARCore returns the depth data according to pt. 2, which means that we need to decide how we want to make progress here. It seems that we have 2 systems returning data in 2 different ways, and we'd like to not mandate anything that'd incur large costs on the implementers (e.g. the performance impact of mandating adjusting the data in some manner). We also need to thread the needle carefully if we do not want to make a breaking change. At a minimum, I think the description of the data contained by the buffer will need to change.
Hm... this is tricky. I certainly don't want to break anyone, but I also question how many apps are already making use of these values. I think it's likely to increase with the Quest adding this functionality, but it's being exposed there in a fairly different manner, so I think we have the opportunity to make some changes to how the data is interpreted now, as anyone who's interested in expanding their existing app's compatibility will have to update their usage regardless.

I'm also reluctant to enforce data transformation to a specific space. As @bialpio points out, there are probably different spaces that make sense for different use cases, and if we're pushing for the system to normalize we could end up just forcing devs to undo a spec-mandated transformation because we chose the "wrong" space for their use case. In other words, if there are going to be transformations anyway, let's leave them in the hands of the person who knows best what's needed: the developer.

I feel like adding the data from #43 is the ultimate solution, because then there's no ambiguity about what the range is and different systems can conform to any requirements imposed on them by their hardware/platform. If you measure from the camera?
I agree that we don't want to tie this to a space. Quest is returning depth far/near so the author can feed that back into WebXR, not to interpret the values of the depth buffer differently.
That won't work for Quest because, AFAIK, it will always report infinity for the far plane.
Sorry, I didn't mean to suggest "space" in the proper WebXR sense here. I meant "depth range".
Noted. So I guess we'd have to at least enable the possibility of a far plane at infinity, unless we're really confident that's what all implementations are going to do.
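To make the "far plane at infinity" case concrete: a standard WebGL-style perspective matrix still works in the limit far → ∞, because its two depth-related entries converge to -1 and -2 * near. A sketch (column-major, clip-space Z in [-1, 1]; not spec text):

```ts
// Sketch: perspective projection matrix whose far plane may be Infinity.
// As far -> Infinity, (far + near) / (near - far) -> -1 and
// 2 * far * near / (near - far) -> -2 * near.
function perspective(fovY: number, aspect: number, near: number, far: number): Float32Array {
  const f = 1 / Math.tan(fovY / 2);
  const c = Number.isFinite(far) ? (far + near) / (near - far) : -1;
  const d = Number.isFinite(far) ? (2 * far * near) / (near - far) : -2 * near;
  return new Float32Array([
    f / aspect, 0, 0,  0,
    0,          f, 0,  0,
    0,          0, c, -1,
    0,          0, d,  0,
  ]);
}
```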
/facetoface to chat about the best way to move this issue forward.
I think it might be too late to add it to the agenda at this point but we will take a look Monday morning to see how we can fit it in. Please remind me on Monday. |
Discussed during the F2F. Conclusions:
Three.js already seems to have it implemented, with depthNear and depthFar used for depth. The three.js implementation suggests other divergences from the spec too; for example, https://immersive-web.github.io/webxr/#dictdef-xrsessioninit is not respected by the implementation. I am not sure whether the spec should be updated with this change or whether three.js should be patched to conform.
Yes, we return depthFar as infinity and had to make changes to the browser to accept that value.
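For what it's worth, "feeding that back into WebXR" could look roughly like the sketch below on the app side, using XRSession.updateRenderState; depthNearReported / depthFarReported stand in for whatever the system exposes, and the finite fallback is only needed on browsers that reject Infinity:

```ts
// Sketch: mirror the system-reported depth range into the session's render state
// so the projection matrices WebXR hands back match the ones used for the depth data.
// Assumes WebXR type definitions (e.g. @types/webxr) are available.
function syncDepthRange(
  session: XRSession,
  depthNearReported: number,
  depthFarReported: number,
): void {
  session.updateRenderState({
    depthNear: depthNearReported,
    // Some browsers may not accept Infinity here, so fall back to a large finite value.
    depthFar: Number.isFinite(depthFarReported) ? depthFarReported : 10000,
  });
}
```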
SessionInit is respected by all implementations. Does Quest browser diverge? |
I was not able to locate SessionInit being used for the depth-sensing options in the three.js implementation. Quest Browser works fine with the three.js depth sample, but I was not able to find where SessionInit is passed in through the three.js implementation for Quest Browser.
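For context, this is roughly how the depth-sensing feature is configured through XRSessionInit in the depth-sensing module; the preference values shown are illustrative, not a recommendation:

```ts
// Sketch: requesting depth sensing via XRSessionInit (WebXR typings assumed).
async function requestDepthSensingSession(): Promise<XRSession | undefined> {
  return navigator.xr?.requestSession("immersive-ar", {
    requiredFeatures: ["depth-sensing"],
    depthSensing: {
      usagePreference: ["cpu-optimized", "gpu-optimized"],
      dataFormatPreference: ["luminance-alpha", "float32"],
    },
  });
}
```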
When going over the spec for issue #43, I have realized that we may have a mismatch between what the specification says, and what we do in our ARCore-backed implementation in Chrome.
Namely, the spec says that in the buffer that we return, "each entry corresponding to distance from the view's near plane to the users' environment".
ARCore's documentation seems to have a conflicting phrasing:
"Given a camera `C`, a point `A` on the observed real-world geometry, and a 2D point `a` representing the same point in the depth image, the value given by the Depth API at `a` is equal to the length of `CA` projected onto the principal axis."

So there are two possible behaviors:

1) the values are distances measured from the view's near plane to the user's environment (what the spec currently says), or
2) the values are distances measured from the camera to the user's environment, projected onto the principal axis (what ARCore's documentation seems to describe).

If ARCore returns data according to 1), then I think it'd be acceptable to leave the spec text as-is, but then our implementation may not be correct (namely, I think we may run into the same issue that causes @cabanier to need to expose, at the very least, the near plane distance that ARCore internally uses?).
If ARCore returns data according to 2), then the values in the buffer we return are not going to depend on the near plane. In this case, we are not going to be compliant with the spec (we don't have a distance from the near plane to the user's environment), and the only way to become compliant would be to adjust each entry in the buffer - this may be expensive given that it'll happen on the CPU. IMO the best way to fix this would be to change the spec prose here, but I think this may be considered a breaking change, so we'll need to discuss how to move forward.
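Stated compactly, under the assumptions above (C is the camera origin, ẑ the unit principal axis, A the environment point behind depth-map entry a, and n the near-plane distance - notation introduced here, not spec text), the two readings differ by exactly n:

```latex
d_{\mathrm{ARCore}}(a) = \overrightarrow{CA} \cdot \hat{z},
\qquad
d_{\mathrm{spec}}(a) = \overrightarrow{CA} \cdot \hat{z} - n = d_{\mathrm{ARCore}}(a) - n
```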
I'm going to try to confirm with ARCore what their actual behavior is; I'm not sure this issue is actionable until that happens.