Questions related to a mesh object's position (how to getPosition, getRotation) and selecting them with a trackpad or touchscreen #455
Comments
Thank you very much. I'll try your suggestions. :-)
@dmurdoch, I've done more reading on matrix transformations: at your convenience, would you clarify whether I understand the difference between the rgl package and a 3D engine such as Unity or Unreal correctly? Unity and Unreal allow the programmer to read the values of the transformed matrices that result from applying transformations to meshes, but rgl does not (and that is why in Unity or Unreal the programmer can use functions to "read" what the position, scale, or rotation of a mesh object is in world space, but cannot do that with rgl)? Or am I misunderstanding the concept described in this post: https://gamedev.stackexchange.com/questions/189896/how-mesh-transformation-works-under-the-hood

Or, does rgl.attrib(3dmesh, "vertices") actually retrieve the transformed positions of the vertices if I have applied some transformation-matrix commands to them? If so, then I can understand your previous explanation that it just might take a little longer (than a single command such as Unity's for obtaining a mesh's world-space position) to calculate the centroid of the transformed vertices returned by rgl.attrib.

I don't mean to request a lengthy explanation, and I understand if this is too advanced for a novice 3D programmer like myself, but I was hoping I might be on the right track in understanding what I'd need to do if I wanted to try some more advanced 3D programming with rgl (for instance, creating scenes with multiple moving objects, writing code to store the locations of those objects, writing simple physics functions for how those moving objects might interact, etc.). I understand such ideas were not likely the goal of the 3D functionality of the rgl package, but I was curious, and I suspect some other users of the package might also wonder how far one could go with rgl and 3D programming in R. Thank you.
I'm not familiar with Unity or Unreal. In rgl, one way to transform an object is to apply the transformation to its vertices before plotting it. The other way to transform it is to apply the transformation to the scene or subscene where the object was plotted. Then it appears to move, but the vertices stored for the object don't change. You say that those engines allow you to retrieve the transformation of an object. That's not possible in rgl. If you want to duplicate the programming model that you describe, then the idea would be to put each collection of rgl objects that form a single solid real-world object into a separate subscene, and manipulate the matrices associated with that subscene to move the objects. That's how the rgl2gltf package handles it.
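(Not from the original thread: a minimal sketch of the subscene-per-object idea. The newSubscene3d() argument choices, and the assumption that a subscene created with model = "modify" gets its own userMatrix that composes with the parent's, are mine, not taken from the discussion.)

```r
library(rgl)
open3d()
root <- currentSubscene3d()
# give the cube its own subscene so it can be moved independently (assumes
# newSubscene3d() leaves the new subscene current, as documented)
boxSub <- newSubscene3d(viewport = "inherit", projection = "inherit",
                        model = "modify", parent = root)
shade3d(cube3d(color = "green", tag = "greenBox"))  # plotted into boxSub
useSubscene3d(root)                                 # make the root current again
# move only the cube by changing the matrix associated with its subscene
# (the thread uses translationMatrix() as a userMatrix in the same way)
par3d(userMatrix = translationMatrix(0, 0, -2), subscene = boxSub)
```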
Thank you very much, @dmurdoch -- I think I'm beginning to understand rgl better and how I might track/find what I'm looking for (mesh positions), and meanwhile I'm learning more about matrix transformations for OpenGL. I ended up following what you suggested with only a slight modification, and will relate it back to the 3D game engine terminology that I'm more used to. It seems the two different ways of transforming a mesh-type object in rgl are:
mesh <- translate3d(cube3d(color = "green", tag = "greenBox"), 0, 0, -2) will translate the green cube mesh 2 units down the z axis (into the screen), with the result that when rendered in rgl with shade3d(mesh), the green cube mesh will be positioned 2 units down the z axis in the "world space" rgl scene.
par3d(userMatrix = translationMatrix(0, 0, -2)) will instead translate the green mesh 2 units down the z axis after it has already been placed in the rgl "world space" scene. Despite the end result looking the same on the screen, I can tell that the transformation process was different by running:
matrixSequence("greenBox") which nicely shows me three matrices: "Id": the x,y,z coordinates of the green cube mesh in its "local space" coordinates "Matrix" which appears to be the product of all the transformation matrices there were applied to the green cube mesh's local space coordinates to generate the final green cube mesh's coordinates in the rgl "world space" scene. (It appears there is a default transformation matrix assigned to the rgl scene's "userMatrix" that looks like it helps position any objects rendered in an rgl scene for better viewing [appears to slightly rotate the object placed at the 0,0,0 center of the rgl scene "world space") "Transformed" the resultant x,y,z coordinates of the green cube mesh's vertices in the rgl "world space" scene after the "userMatrix" transformations noted above have been applied to the vertices positions. I mention all of the above because one of my original questions was whether or not a programmer could determine the location of a(each) mesh object in the rgl scene world space. Based on the above, I believe a programmer could determine the location of any mesh object in the rgl scene by doing:
Programmatically, it looks like one could also retrieve those values, but it seems a bit more complicated, as those values appear to be stored as follows: matrixSequence("tagOfMeshObjectInterestedIn")[[1]]$userMatrix[["7"]][#, 4], where # = 1, 2, or 3, corresponding to the x, y, z position coordinates of the mesh object in rgl "world space". (The mesh object's rotation could also likely be obtained from other cells of that userMatrix, but that would be more complicated, as one would need to convert the values from radians back to degrees, noting that the values stored in those cells are either the sine or cosine of the rotations about the different axes.) I also found that, if I get lost tracking multiple transformations that I've applied to a mesh object, it helps to have saved the scene's default userMatrix transformation that I mentioned earlier, so I can restore it later.
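(The command originally shown here was not preserved; a minimal sketch of saving and restoring the default userMatrix might look like this, with the variable name savedUM being illustrative:)

```r
library(rgl)
open3d()
# save the default userMatrix right after opening the device
savedUM <- par3d("userMatrix")
# ... experiment with transformations ...
par3d(userMatrix = translationMatrix(0, 0, -2))
# restore the original view when things get tangled
par3d(userMatrix = savedUM)
```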
I also realize, as you mentioned, that if I wanted to track the location (e.g. movement) of multiple 3D meshes in an rgl "world scene", since one can only apply world-space transformations to already-plotted mesh objects by using the par3d(userMatrix = ...) function, which works on a scene rather than on a specific mesh object per se, it would be best to set up each mesh in a separate rgl subscene and then specify each subscene when using par3d(), so that each specific mesh object's position can be tracked. With the insight you've provided and what I've learned so far, I'll keep exploring the 3D "engine" capabilities of rgl. Hopefully the information contained in this issue might be helpful to other rgl users with similar interests :-)
It sounds as though you've got the ideas now. The only part that might be incorrect is point 4 (reading positions and rotations out of individual matrix entries): the entries in a 4x4 transformation matrix can be hard to interpret when translations and rotations are combined, and really hard if you allow any other kinds of transformations. I think what you say applies to the case where there are translations and no other changes. And be careful about using ...
Thanks, @dmurdoch. Perhaps, then, the most accurate method to determine a mesh's current position in world space is what you initially mentioned: determining the centroid of the mesh from its current vertices, and such vertices are listed as the "Transformed" vertices displayed by matrixSequence(meshTag), as in my previous third point. Overall, I'll conclude that while it may be possible to determine at least a mesh's world-space position in an rgl scene, it is likely much more complex to do so than a programmer who is used to working with dedicated 3D engines like Unity or Unreal would expect. Have a good weekend :-)
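(Not from the original thread: a small sketch of the centroid idea for a mesh whose stored vertices have been changed with translate3d() and friends; the helper name meshCentroid is made up for illustration.)

```r
library(rgl)

# centroid of the vertices a mesh3d object currently stores
# (i.e. after translate3d()/rotate3d() have been applied to it)
meshCentroid <- function(mesh) {
  v <- mesh$vb                                   # 4 x n homogeneous coordinates
  rowMeans(v[1:3, , drop = FALSE] / rep(v[4, ], each = 3))
}

mesh <- translate3d(cube3d(color = "green", tag = "greenBox"), 0, 0, -2)
meshCentroid(mesh)                               # approximately c(0, 0, -2)
```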
P.S. @dmurdoch, I think I also found two clarification points that might be helpful to point out in the documentation for other users:
shade3d(cube3d(color = rainbow(6), meshColor = "faces"))

However, the non-transposed versions of the above statements do not correctly set up their intended transformations. I read that one might need to transpose the matrices since OpenGL uses "column major" order for matrix multiplications, while the transformation matrices stored in R are actually stored in "row major" order. If one then wanted to correctly position a mesh object 2 units deep into the screen and view the mesh object rotated -70 degrees, one would combine the transformation matrices like so, in the recommended scale-then-rotate-then-translate order:

par3d(userMatrix = t(translationMatrix(0, 0, -2) %*% t(rotationMatrix(1.22, 1, 0, 0))))
Camera angle: the documentation says the x axis goes from left to right, the y axis goes into the screen, and the z axis goes up. Are the axis orientations instead supposed to say that the x axis goes from left to right, the y axis goes up, and the z axis comes out of the screen? I presumed so, if rgl follows OpenGL conventions. Just FYI. Have a good rest of your weekend.
Regarding the initial orientation: The docs are correct. Run this code and you'll see it:
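(The code block from the original comment was not preserved here; a minimal reconstruction consistent with the description that follows might be:)

```r
library(rgl)
open3d()
spheres3d(0, 0, 0, radius = 0.1, color = "white")  # origin
spheres3d(1, 0, 0, radius = 0.1, color = "red")    # one unit out the x axis
spheres3d(0, 1, 0, radius = 0.1, color = "green")  # one unit out the y axis
spheres3d(0, 0, 1, radius = 0.1, color = "blue")   # one unit out the z axis
```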
That plots a white sphere at the origin, and the others one unit out on the axes: red is X, green is Y, and blue is Z. The reason for this choice is that rgl follows the conventions of statistical plotting, where the z axis is the vertical one, rather than the OpenGL conventions. For the other points about the matrices, you should generally multiply ... All of these issues are confusing because of the various different conventions used in different contexts. We've made an effort to keep ...
Thanks again, @dmurdoch -- I think I've almost got it! So, by statistical conventions, in the plot3d() function +x is to the right, +y is into the plane of the screen, and +z is up. However, it seems that, at least when not specifically "plotting" 3D points but instead using transformation matrices to move mesh objects, rgl goes back to using OpenGL naming conventions for the axes, with +x right, +y up, and +z coming out of the screen; that is what I see when I apply transformation matrices through par3d(userMatrix = ...).
Perhaps the above is another example of what you were talking about in terms of different contexts: the example you provided applies to plotting data on a 3D graph, while the example I provided relates to the context of transforming mesh objects?
In your case you are overriding the userMatrix rather than modifying it.
Thanks again, @dmurdoch. It sounds like the preferred method is to modify the userMatrix rather than overriding it. At your convenience, would you provide an example of how one would modify the userMatrix to translate a mesh object's position, rather than overriding it like I did? I couldn't seem to find an example of how to do that in the rgl examples. I think once I understand that, I'll have completed my understanding of the preferred method for getting and modifying mesh positions, which will hopefully be useful for others who are learning how to use the rgl package, and we can close the issue.
Here are two examples. This one moves the cube (and everything else in the display) by 0.5 units to the right in the viewer's coordinates:
In these coordinates, X runs from left to right, Y goes up, and Z goes into the screen. This second plot does it in the frame of reference of the cube, where the x-axis runs from the cyan side to the green side. It'll look similar, but if you manually rotate the cube just after plotting it, you'll see the difference:
In the second display the y-axis direction is from blue to yellow (i.e. into the screen initially), and the z-axis is from red to magenta (i.e. up).
Interesting, that was helpful, @dmurdoch. I was able to tell the difference better when I changed the y parameter in the translation matrices of your examples above to 0.5: the first example worked like I had been doing and moves the cube up; the second example moves the cube into the screen.

Unfortunately, I don't have any formal training in linear algebra, so I have been learning matrix algebra online. From what I've read, it makes sense that your two examples differ, since matrix multiplication is not commutative and the order in which one multiplies the matrices therefore gives different results. I've been starting with the userMatrix for my transformations as you've illustrated, but had chosen to specify the default user matrix manually to help me understand what is going on in the transformations (i.e. starting from userMatrix = par3d("userMatrix"), which seems to be rgl's default of roughly rotationMatrix(1.22, 1, 0, 0)). I was doing the matrix multiplications in the order suggested by the OpenGL tutorials I was reading; I also read that by OpenGL conventions one should aim to transform 3D objects in scale, then rotate, then translate order. However, from your examples, and since you mentioned that the statistical 3D plotting convention takes x as the horizontal axis, y as running in and out of the viewing plane, and z as the vertical axis, it seems best for my future transformation-matrix work in rgl to do the matrix multiplication in the following order: 3DobjMatrixToModify %*% transformationMatrixToApply.

Looking more over the available functions in rgl, I think it might be easier to stick with the transformation functions scale3d(obj, x, y, z), rotate3d(obj, angle, x, y, z) and translate3d(obj, x, y, z) instead of trying to set up lower-level transformation matrix multiplications. Instead of using par3d(), one could then also more easily select individual objects in a scene to transform (e.g. move across the screen). However, unlike transforming the userMatrix with par3d(), I believe I would need to pop3d() the mesh as previously rendered and then shade3d() the newly transformed (e.g. translated/moved) mesh. Thanks again for all of your time and effort in explaining these aspects of the rgl package.

The only trouble I have with that seemingly simpler method is that I can't seem to get the translate3d() function to work (scale3d() and rotate3d() work as expected and follow the +x horizontal, +y in/out of the plane, +z vertical axis conventions):
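(The commands originally shown here were not preserved; attempts along these lines, which are a reconstruction rather than the poster's exact code, reproduce the behaviour described next:)

```r
library(rgl)
open3d()
mesh <- cube3d(color = "green", tag = "greenBox")
shade3d(mesh)
pop3d()                              # remove the previously drawn cube
shade3d(translate3d(mesh, 2, 0, 0))  # cube appears in the same place on screen
```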
I don't see any translation of the cube with any of those above commands. Perhaps I'm misunderstanding something about how to use them?
No, the problem is that rgl re-centers and re-scales the display to the bounding box of the scene, so when the cube is the only object, translating it just moves the bounding box and the view moves along with it. If you want to do experiments with this, one way is to set up a few extreme points that never move, and then move objects around within those limits. The bounding box won't change, so the movement will be visible.
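(A small sketch of that suggestion; the specific limits and the corner-point trick are illustrative choices, not code from the thread:)

```r
library(rgl)
open3d()
# two fixed corner points that never move, so the bounding box stays constant
points3d(c(-5, 5), c(-5, 5), c(-5, 5))
mesh <- cube3d(color = "green", tag = "greenBox")
shade3d(mesh)
pop3d()                              # remove the cube drawn above
shade3d(translate3d(mesh, 2, 0, 0))  # now the move is clearly visible
```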
Got it, @dmurdoch, thanks! I didn't see a note about re-adjusting to the center of the bounding box under the transformation commands, but I see that it is covered in the informational section on how rgl renders, in the par3d() documentation. So it seems that if the user wants full independence to translate mesh objects, it may be best to use the par3d(userMatrix = transformationMatrixUserCreates) method (or else remember that a translate3d() move can be hidden by the viewport re-centering on the bounding box). Perhaps in a future iteration of rgl you could consider making it an option to turn that "center the viewport on the center of the bounding box" behaviour off (keeping centering as the default)? The translate3d() command otherwise seems much simpler to use for programming, and similar to commands used in other 3D engines. I think I now have answers to all of my questions on how to use rgl to manipulate 3D meshes, and I hope the lengthy discussion and examples may be useful for other users with similar questions. Please feel free to close this issue unless you had additional suggestions or points you thought were important. Have a good rest of your weekend and upcoming week.
Great to see how far the rgl package has come since I last used it and now with the additional features from the rgl2gltf package!
To use R and rgl more as a 3D graphics engine, though, I'm trying to figure out how to do the following for mesh object(s):
Is there a way to get a mesh object's position (something like a getPosition function)? I know I can use
rgl.attrib(meshObject, "vertices")
to retrieve the positions of its individual vertices, but I'd like to be able to keep track of the meshObject as a whole, especially if I want to move an object around using:
translate3d(meshObject, x, y, z)
Similar to the above question, is there a way to get a mesh object's rotation (especially if I want to keep track of how rotated the meshObject is)?
Is there a way to use either a trackpad or a touchscreen to accomplish something similar to the select3d() function?
I'd like to be able to use such a function in a loop to detect a user selecting a meshObject in the scene (i.e. to detect that a user touched a point in window space that corresponds to the volume enclosed by a meshObject).
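(Not part of the original question: a rough sketch of hit-testing a mesh with the function that select3d() returns after a mouse selection. Whether that selection can be driven from a trackpad or touchscreen is exactly what is being asked here.)

```r
library(rgl)
open3d()
mesh <- cube3d(color = "green", tag = "greenBox")
shade3d(mesh)
f <- select3d()                        # user drags out a selection region
v <- mesh$vb                           # 4 x n homogeneous vertex coordinates
xyz <- t(v[1:3, , drop = FALSE] / rep(v[4, ], each = 3))
hit <- any(f(xyz[, 1], xyz[, 2], xyz[, 3]))  # TRUE if any vertex was selected
hit
```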
Thanks!
Have a good week :-)