Background
Creating immersive interactive experiences in a virtual environment is difficult because physical constraints and haptic sensations cannot be fully reproduced with today's VR software and hardware. This is especially challenging when we want hand presence in VR while grasping and manipulating objects.
For example, when a user places the avatar hand close to an object and presses the grasp button to initiate a grasp interaction, there is no guarantee that the wrist is placed so well that simply closing the fingers around the object produces a natural-looking grasp configuration. In real life this problem is trivial, since we can always rely on our fast sensory-motor feedback loop to correct our hand and finger poses; in VR such feedback does not exist.
VirtualGrasp fills the gap left by this missing sensory-motor feedback, using a generative grasp synthesis algorithm to create immersive grasp interaction experiences in VR.
VG enables robust grasp interactions. Compared with many physics-based grasp synthesis solutions on the market, VG takes a different approach by exploiting "object intelligence". By analyzing the shape and affordances of an object model in VR, it can synthesize grasp configurations for a hand knowing only where the wrist is, without any dependence on expensive physics simulation. As a result:
- there is no dependency on accurate finger-tracking controllers (see controllers), and
- users do not need to spend cognitive load on carefully placing their fingers around the object.
This page first describes how VG creates object grasp interaction, and then explains a set of parameters for configuring and fine-tuning the grasp interaction experience in your VR application.
From Object Selection to Grasp Synthesis
In VR, grasp interaction consists of two consecutive processes:
- object selection, and
- grasp synthesis.
VirtualGrasp provides an object selection mechanism: it checks collisions between a grasp selection sphere attached to the hand and the objects in the scene, and chooses the "closest" object for grasping. Note that this process happens inside the VirtualGrasp library; no collider setup or physics simulation is needed in any client engine.
Once an object is selected by a hand, it is ready for grasp synthesis.
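The exact selection logic is internal to the VirtualGrasp library, but the idea can be sketched as follows. Everything in this sketch – `SceneObject`, `SelectClosestObject`, and the bounding-sphere distance heuristic – is an illustrative assumption, not the VG API:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical scene object: its center position and a rough bounding radius.
struct SceneObject {
    const char* name;
    float x, y, z;   // object center
    float radius;    // bounding-sphere radius
};

// Return the index of the object whose surface lies within the grasp
// selection sphere and is closest to the sphere center, or -1 if nothing
// is in range.
int SelectClosestObject(float hx, float hy, float hz, float selectionRadius,
                        const std::vector<SceneObject>& objects) {
    int best = -1;
    float bestDist = selectionRadius;
    for (int i = 0; i < (int)objects.size(); ++i) {
        const SceneObject& o = objects[i];
        float dx = o.x - hx, dy = o.y - hy, dz = o.z - hz;
        float surfaceDist = std::sqrt(dx * dx + dy * dy + dz * dz) - o.radius;
        if (surfaceDist < bestDist) {
            bestDist = surfaceDist;
            best = i;
        }
    }
    return best;
}

int main() {
    std::vector<SceneObject> scene = {
        {"mug", 0.30f, 1.00f, 0.00f, 0.05f},
        {"pen", 0.10f, 1.05f, 0.02f, 0.01f},
    };
    // Selection sphere attached to the hand, 0.15 m radius.
    int picked = SelectClosestObject(0.12f, 1.0f, 0.0f, 0.15f, scene);
    if (picked >= 0) std::printf("selected: %s\n", scene[picked].name);
}
```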
Grasp Synthesis Method
Grasp synthesis refers to the runtime process of creating hand grasp configurations – the wrist and finger poses w.r.t. the object – when a user triggers grasp with the VR controllers.
VG provides two alternative methods for grasp synthesis: static grasp (SG) and dynamic grasp (DG).
Static Grasp (SG) | Dynamic Grasp (DG)
---|---
Creates grasp configurations from one of N grasps stored in a grasp database. | Computes grasp configurations at the moment of grasp triggering.
Limited number of sparse grasps, unless parameterized to be denser. | Infinite, flexible grasps.
No runtime overhead (a simple DB access). | Small, negligible runtime overhead (a generative algorithm runs).
Hand-sensor immersion can break due to sparse grasps. | Hand-sensor immersion does not break.
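To make the SG column concrete, here is a minimal sketch of a grasp database lookup. The `Grasp` struct, `StaticGraspLookup`, and the selection rule (pick the stored grasp whose wrist position is closest to the current sensor pose) are assumptions for illustration only, not VirtualGrasp's actual data model or selection criterion:

```cpp
#include <cstdio>
#include <vector>

// Illustrative stored grasp: a wrist position relative to the object plus
// finger joint angles. Names are assumptions for this sketch.
struct Grasp {
    float wx, wy, wz;               // wrist position w.r.t. the object
    std::vector<float> fingerJoints;
};

// Static Grasp as a DB lookup: return the stored grasp whose wrist position
// is closest to where the sensor currently places the wrist (assumed rule).
const Grasp* StaticGraspLookup(const std::vector<Grasp>& db,
                               float sx, float sy, float sz) {
    const Grasp* best = nullptr;
    float bestDist = 1e9f;
    for (const Grasp& g : db) {
        float dx = g.wx - sx, dy = g.wy - sy, dz = g.wz - sz;
        float d = dx * dx + dy * dy + dz * dz;  // squared distance suffices
        if (d < bestDist) { bestDist = d; best = &g; }
    }
    return best;  // nullptr if the database is empty
}

int main() {
    std::vector<Grasp> db = {
        {0.00f, 0.05f, 0.00f, {0.8f, 0.9f, 0.9f, 0.7f, 0.6f}},
        {0.05f, 0.00f, 0.02f, {0.6f, 0.8f, 0.8f, 0.8f, 0.5f}},
    };
    const Grasp* g = StaticGraspLookup(db, 0.04f, 0.01f, 0.0f);
    std::printf("picked grasp at (%.2f, %.2f, %.2f)\n", g->wx, g->wy, g->wz);
}
```

With only a few entries in the database, the chosen wrist pose can be far from the sensor pose, which is exactly the sparsity issue the table describes; DG avoids this by generating a pose near the sensor pose at trigger time.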
To create natural-looking grasp configurations during grasp synthesis, the object needs to be baked. The output of baking is a grasp database, which enables DG for any humanoid hand.
In situations where you need to use SG (see section choosing synthesis method and interaction type), Grasp Studio can be used to add grasps to the database through DG.
Grasp Interaction Type
As mentioned in the background section, when a user triggers grasp, the wrist may not be at a good pose w.r.t. the object. VG's grasp synthesis algorithm "corrects" this misplacement of the wrist and creates a grasp configuration whose wrist pose differs from the sensor pose at the moment of grasp triggering. Because of this difference, there are several alternative ways to pose the object-hand grasp ensemble, each creating a different user experience:
Interaction Type | Description | Considerations
---|---|---
Trigger Grasp | When the user triggers grasp, the hand moves to the wrist pose of the synthesized grasp configuration around the object. | Since the hand moves away from the sensor pose, this can break hand-sensor immersion.
Jump Grasp | When the user triggers grasp, the object jumps to the grasped position in the hand. | The object moves immediately upon grasp triggering, which may not suit tasks that require physical stability (e.g. playing a Jenga game).
Jump Primary Grasp | When the user triggers grasp, the object jumps to the grasped position in the hand, using the labeled primary grasp(s) in the grasp DB. | Primary grasps are needed when an object should be grasped in a particular way (e.g. how scissors are held). Note: to use this interaction type, you must first add primary grasps to the grasp DB through Grasp Studio.
Preview Grasp | Once the user has selected an object, the grasp configuration is previewed on the object, so the user can push the trigger button to pick up the object if the grasp is satisfactory. | Grasp synthesis runs every frame while an object is selected, so with DG this can lower the frame rate.
Preview Only | Once the user has selected an object, the grasp configuration is previewed on the object, but the grasp trigger will not pick up the object. | Grasp synthesis runs every frame while an object is selected, so with DG this can lower the frame rate.
Sticky Hand | A fallback when the object is not baked: the grasp configuration is taken directly from the hand pose at the moment of grasp triggering, as if the hand sticks to the object. | This allows VR developers to set up interactive behaviors through object articulation before baking objects.
Choosing Synthesis Method and Interaction Type
As explained in the previous sections, different combinations of synthesis method and interaction type create different user experiences. Due to the nature of each option, some combinations work better than others. The table below gives some guidance:
Interaction Type | Synthesis Method | Evaluation
---|---|---
Trigger Grasp | DG | ☑ Good: DG creates a grasp pose with the wrist close to the sensor pose, so the hand does not move much.
Trigger Grasp | SG | ☒ Not recommended: with sparse grasps in the DB, the hand can move far from the sensor pose, breaking hand-sensor immersion.
Jump Grasp | DG | ☑ Good: DG creates a grasp pose close to the sensor pose, so the object does not jump much.
Jump Grasp | SG | ☑ OK, as long as a big jump of the object at the moment of grasping is not a problem.
Jump Primary Grasp | DG | ☒ Not possible: primary grasps are grasps in the DB, which are only used during SG synthesis.
Jump Primary Grasp | SG | ☑ Required: primary grasps are grasps in the DB, which are only used during SG synthesis.
Preview Grasp | DG | ☑ Good; recommended in Grasp Studio when adding grasps to the DB through DG.
Preview Grasp | SG | ☒ Not recommended: during the preview phase the hand will be very jumpy due to sparse grasps in the DB.
Preview Only | DG | ☑ Good; recommended in Grasp Studio when adding grasps to the DB through DG.
Preview Only | SG | ☒ Not recommended: during the preview phase the hand will be very jumpy due to sparse grasps in the DB.
Sticky Hand | – | Sticky Hand is a fallback when objects are not baked, so neither SG nor DG applies.
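As an illustration, the recommendations above can be encoded in a small compatibility check. The enum and function names are assumptions for this sketch, not VirtualGrasp identifiers:

```cpp
#include <cstdio>

// Illustrative enums mirroring the options described above.
enum class SynthesisMethod { StaticGrasp, DynamicGrasp };
enum class InteractionType {
    TriggerGrasp, JumpGrasp, JumpPrimaryGrasp,
    PreviewGrasp, PreviewOnly, StickyHand
};

// Encode the recommendations from the table above; "not possible" and
// "not recommended" are both folded into false for simplicity.
bool IsRecommended(InteractionType it, SynthesisMethod sm) {
    switch (it) {
        case InteractionType::TriggerGrasp:
        case InteractionType::PreviewGrasp:
        case InteractionType::PreviewOnly:
            return sm == SynthesisMethod::DynamicGrasp;
        case InteractionType::JumpGrasp:
            return true;  // DG preferred, SG acceptable
        case InteractionType::JumpPrimaryGrasp:
            return sm == SynthesisMethod::StaticGrasp;  // primary grasps live in the DB
        case InteractionType::StickyHand:
            return true;  // fallback; synthesis method is irrelevant
    }
    return false;
}

int main() {
    std::printf("TriggerGrasp + SG recommended? %d\n",
                (int)IsRecommended(InteractionType::TriggerGrasp,
                                   SynthesisMethod::StaticGrasp));  // prints 0
}
```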
Grasp Animation Speed and Release Animation Speed

In the global grasp interaction settings, you can set the default synthesis method and interaction type for all objects in the scene. Two further parameters – grasp animation speed and release animation speed – also significantly affect the user experience, because they determine how fast the hand forms a grasp and releases it, respectively.
These values are in seconds. If grasp animation speed is 0.1, it takes 0.1 s from grasp triggering for the hand to form a complete grasp configuration on the object.
If release animation speed is 0.1, it takes 0.1 s from release triggering for the hand to move from the grasp configuration on the object back to its sensor pose.
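A minimal sketch of how such a duration-style parameter can drive the hand animation, assuming a simple linear blend from the open hand (0) to the full grasp configuration (1); `GraspBlend` is illustrative, not the VG API:

```cpp
#include <algorithm>
#include <cstdio>

// Blend factor from the open hand (0) to the full grasp configuration (1),
// given the time elapsed since grasp was triggered. graspAnimationSpeed is
// interpreted as a duration in seconds, as described above.
float GraspBlend(float elapsedSeconds, float graspAnimationSpeed) {
    if (graspAnimationSpeed <= 0.0f) return 1.0f;  // instantaneous grasp
    return std::min(elapsedSeconds / graspAnimationSpeed, 1.0f);
}

int main() {
    // With graspAnimationSpeed = 0.1 s, the hand reaches the full grasp
    // configuration 0.1 s after the trigger.
    for (float t = 0.0f; t < 0.13f; t += 0.03f)
        std::printf("t=%.2fs blend=%.2f\n", t, GraspBlend(t, 0.1f));
}
```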
Throw Velocity Scale and Throw Angular Velocity Scale
The two velocity scales let you scale the throwing power up or down when an object is released from all grasping hands. Throw velocity scale scales how fast the object translates, while throw angular velocity scale scales how fast it rotates.
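As an illustration, the release step can be thought of as scaling the hand's tracked velocities before applying them to the object's rigid body. The types and functions below are assumptions for this sketch, not VirtualGrasp identifiers:

```cpp
#include <cstdio>

// Hypothetical minimal vector type for this sketch.
struct Vec3 { float x, y, z; };

Vec3 Scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// On release, the hand's tracked linear and angular velocities are scaled
// by the two parameters before being handed to the object's rigid body.
void ApplyThrow(Vec3 handLinVel, Vec3 handAngVel,
                float throwVelocityScale, float throwAngularVelocityScale,
                Vec3* outLinVel, Vec3* outAngVel) {
    *outLinVel = Scale(handLinVel, throwVelocityScale);
    *outAngVel = Scale(handAngVel, throwAngularVelocityScale);
}

int main() {
    Vec3 lin, ang;
    // Doubling the throw power while keeping the spin unchanged.
    ApplyThrow({1.0f, 2.0f, 0.0f}, {0.0f, 3.0f, 0.0f}, 2.0f, 1.0f, &lin, &ang);
    std::printf("thrown with v=(%.1f, %.1f, %.1f), w=(%.1f, %.1f, %.1f)\n",
                lin.x, lin.y, lin.z, ang.x, ang.y, ang.z);
}
```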