A few basic questions

Discuss any questions about BEPUphysics or problems encountered.
needshelprendering
Posts: 25
Joined: Mon Sep 17, 2012 1:17 am

A few basic questions

Post by needshelprendering »

Hi Norbo, I have a few questions, mainly around working with animations.

For a little context, here is my setup:
I create a character controller and use it to move around. My AI will also use character controllers; is that pretty much the most efficient path?
I create my character from a model; he is kinematic, inside the character controller.

Now, when I play animations, will the Bepu entity change with this? If so, would it be possible to turn off self-collision? My models are fairly high poly.
Second, I researched it a little bit, but I will just ask: how would I scale a ConvexHullShape? Here is how the ConvexHullShape is made with XNA Final Engine:

Code:

        /// <summary>
        /// Creates and assigns a kinematic entity using the model stored in the model filter component.
        /// </summary>
        public void CreateKinematicEntityFromModelFilter()
        {
            ModelFilter modelFilter = ((GameObject3D)Owner).ModelFilter;
            if (modelFilter != null && modelFilter.Model != null)
            {
                // Build a convex hull from the model's vertices and wrap it in a kinematic entity.
                ConvexHullShape shape = new ConvexHullShape(modelFilter.Model.Vertices);
                Entity = new Entity(shape);
            }
            else
            {
                throw new InvalidOperationException("Rigid Body: Model filter or model not present.");
            }
        } // CreateKinematicEntityFromModelFilter
Third, would it be better to use bounding boxes or the actual model for bullet collisions? I am just performing a raycast for most of my weapons. But I am going to make some slow projectiles and grenades use simple shapes.
Fourth and final: is it possible, generally speaking, to blend animations and inverse kinematics? I assume it would just be a matter of blending the bone locations between the animations and the inverse transforms?

Thanks a lot for helping me out so much.
Norbo
Site Admin
Posts: 4929
Joined: Tue Jul 04, 2006 4:45 am

Re: A few basic questions

Post by Norbo »

I create a character controller, and use that to move around. My AI will also use character controllers, is that pretty much the most efficient path?
If character-like behavior is desired, then yes, character controllers are the way to go.
Now, when I play animations, will the Bepu entity change with this?
Nope, not automatically. BEPUphysics has absolutely no awareness of graphics. If you want the collision shape to match new animation state, the collision shape must be changed manually.
...would it be possible to turn off self-collision?
Single objects never collide with themselves. Collision rules can be used to filter out collisions between different objects.
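For example, a pairwise rule can filter out a specific pair of objects (a sketch against BEPUphysics v1's CollisionRules; `entityA` and `entityB` are placeholder entities):

```csharp
using BEPUphysics.CollisionRuleManagement;

// Disable collision between two specific entities.
// NoBroadPhase means the pair is never even considered by the broad phase,
// which is the cheapest way to ignore a pair entirely.
CollisionRules.AddRule(entityA, entityB, CollisionRule.NoBroadPhase);
```

Rules can also be set per-entity or per-group through an entity's CollisionInformation.CollisionRules, which is handy when many objects share the same filtering policy.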
Second, I researched it a little bit, but I will just ask, how would I scale a ConvexHullShape? Here is how the ConvexHullShape is made with XNA Final Engine:
Scaling the points used to create the ConvexHullShape would work. Another option would be to put the ConvexHullShape into a TransformableShape, though this adds a bit of overhead compared to just baking the scaling into the points directly.
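Both options might look something like this (a sketch assuming BEPUphysics v1's API; `vertices` and `scale` are placeholders, and the exact matrix type name can vary between versions):

```csharp
// Option 1: bake the scale into the points before constructing the hull.
// No runtime cost beyond the one-time hull construction.
var scaledVertices = new List<Vector3>(vertices.Count);
foreach (Vector3 v in vertices)
    scaledVertices.Add(v * scale);
var scaledShape = new ConvexHullShape(scaledVertices);

// Option 2: wrap the hull in a TransformableShape.
// Slightly more overhead per collision test, but the transform can be changed later.
var transformableShape = new TransformableShape(
    new ConvexHullShape(vertices),
    Matrix3X3.CreateScale(scale));
```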
Third, would it be better to use bounding boxes or the actual model for bullet collisions? I am just performing a raycast for most of my weapons. But I am going to make some slow projectiles and grenades use simple shapes.
Approximations are almost always preferred. A limited set of BoxShapes (or similar simple primitives) moved around to match the latest bone state would be a lot faster in every way than trying to update/test a whole mesh.

You may also find that having all of the constituent body parts actually in the Space is unnecessary. For example, if the CharacterController's Body cylinder is hit, proceed to test a set of simple shapes representing the animation state for intersection directly. Since the shapes aren't actually in the Space, the broad phase doesn't have to do as much work and things will run faster. This could be important if you have a large number of characters. Additionally, you could avoid performing shape transforms on any frame where the character's Body cylinder isn't hit. That could be a big performance win.
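That two-phase idea might look roughly like this (a hedged sketch; `character`, `boneProxies`, and the shape/transform pairing are all hypothetical, and the exact RayTest signature may vary by version):

```csharp
// Coarse phase: test the character's body cylinder, which lives in the Space.
RayHit hit;
if (character.Body.RayCast(ray, maximumLength, out hit))
{
    // Fine phase: test simple per-bone shapes that are NOT added to the Space.
    // Their transforms only need to be updated from animation state when a
    // coarse hit actually occurs.
    foreach (var bone in boneProxies) // hypothetical list of bone shape/transform pairs
    {
        if (bone.Shape.RayTest(ref ray, ref bone.Transform, maximumLength, out hit))
        {
            // The bullet hit this specific bone; apply damage, spawn effects, etc.
            break;
        }
    }
}
```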
Fourth and final, is it possible to blend animations and inverse kinematics from a general perspective. I assume it would just be blending the bone locations between the animations and the inverse transforms?
It is possible, and there is more than one way to do it. The 'correct' approach depends on the behavior you want and the system configuration.
needshelprendering
Posts: 25
Joined: Mon Sep 17, 2012 1:17 am

Re: A few basic questions

Post by needshelprendering »

Nope, not automatically. BEPUphysics has absolutely no awareness of graphics. If you want the collision shape to match new animation state, the collision shape must be changed manually.
Does this mean the entity is based off of the first position of the model? For example, I have a model that has baked animations but is in a T pose.
It is possible, and there is more than one way to do it. The 'correct' approach depends on the behavior you want and the system configuration.
Well, I am going to be using it mainly for ragdolls and foot/limb placement. What would you generally use for a situation like this? I was thinking of just anchoring it to the bones of the feet and hands, and when they collide, moving the model's bone transform to the kinematic's position. If you know of a better way to do this, I'm open to hearing it. Thanks.
Norbo
Site Admin
Posts: 4929
Joined: Tue Jul 04, 2006 4:45 am

Re: A few basic questions

Post by Norbo »

Does this mean the entity is based off of the first position of the model? For example, I have a model that has baked animations but is in a T pose.
It is based on whatever data is actually used to construct the entity. Since the engine has no awareness of graphics, the choice of which state to use is made externally; the engine just receives a bunch of points (and/or indices) regardless.
Well, I am going to be using it mainly for ragdolls and foot/limb placement. What would you generally use for a situation like this? I was thinking of just anchoring it to the bones of the feet and hands, and when they collide, moving the model's bone transform to the kinematic's position. If you know of a better way to do this, I'm open to hearing it. Thanks.
Sorry, I'm not clear enough on the context to provide a specific suggestion about IK.

It doesn't sound like inverse kinematics is really relevant to the ragdoll case, though. If you want to blend an animation with a physically simulated ragdoll, there's no IK involved, just blending of simulation with animation. A common approach is to blend the local transforms of parents and their children between the simulation state and the animation state according to some parameter 0 to 1. (For example, at 0, the relative transforms from parent to child are exactly those specified by the animation. At 1, the transforms are exactly those specified by the simulation.)
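A sketch of that kind of per-bone blend, using XNA's math types (`blend` is the 0-to-1 parameter described above; the bone arrays are placeholders for however your animation system stores local transforms):

```csharp
// Blend each bone's local (parent-relative) transform between animation and simulation.
// blend = 0 -> pure animation; blend = 1 -> pure ragdoll simulation.
for (int i = 0; i < boneCount; i++)
{
    Quaternion orientation = Quaternion.Slerp(
        animationLocalOrientations[i], simulationLocalOrientations[i], blend);
    Vector3 position = Vector3.Lerp(
        animationLocalPositions[i], simulationLocalPositions[i], blend);
    blendedLocalTransforms[i] =
        Matrix.CreateFromQuaternion(orientation) * Matrix.CreateTranslation(position);
}
```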

To avoid any visual separation caused by physical separation in the ragdoll, the relative orientation is often the only simulation transform used. The bones are simply assumed to be perfectly connected. This causes a disconnect between visuals and simulation, but it's temporary and usually a lot better than the alternative.
needshelprendering
Posts: 25
Joined: Mon Sep 17, 2012 1:17 am

Re: A few basic questions

Post by needshelprendering »

It is based on whatever data is actually used to construct the entity. Since the engine has no awareness of graphics, the choice of which state to use is made externally; the engine just receives a bunch of points (and/or indices) regardless.
Okay, great.
Sorry, I'm not clear enough on the context to provide a specific suggestion about IK.

It doesn't sound like inverse kinematics is really relevant to the ragdoll case, though. If you want to blend an animation with a physically simulated ragdoll, there's no IK involved, just blending of simulation with animation. A common approach is to blend the local transforms of parents and their children between the simulation state and the animation state according to some parameter 0 to 1. (For example, at 0, the relative transforms from parent to child are exactly those specified by the animation. At 1, the transforms are exactly those specified by the simulation.)

To avoid any visual separation caused by physical separation in the ragdoll, the relative orientation is often the only simulation transform used. The bones are simply assumed to be perfectly connected. This causes a disconnect between visuals and simulation, but it's temporary and usually a lot better than the alternative.
Alright, I'll have to go digging into my animation system. Thanks.