Best way to integrate BEPUphysics with other engines?

Discuss any questions about BEPUphysics or problems encountered.
schizoidman
Posts: 1
Joined: Thu Sep 05, 2013 6:28 pm

Best way to integrate BEPUphysics with other engines?

Post by schizoidman »

I've been experimenting with BEPUphysics for a while now using the BEPUphysics drawer, and I'm just about ready to try it in a real game. What's the best way to integrate it with rendering and logic code? As I understand it, BEPUphysics has its own scene graph (the Space class). My renderer also has its own scene graph, as does the game itself. My first naive attempt to bridge the three different worlds involved having the following code inside each game Entity that needed physics and rendering:

Code: Select all

void Update()
{
    // Copy the simulated position from the physics entity to the
    // game object, then pass it along to the renderable.
    this.Position = physicsEntity.Position;
    renderableEntity.Position = this.Position;
}
This is pretty ugly. I guess my two main questions are:
1. It seems to make the actual game world entity more or less irrelevant. Does my game need to have its own scene, or should I let BEPUphysics handle that? Does BEPUphysics support game logic?
2. How should I structure the code that renders physics entities?
Norbo
Site Admin
Posts: 4929
Joined: Tue Jul 04, 2006 4:45 am

Re: Best way to integrate BEPUphysics with other engines?

Post by Norbo »

As I understand it, BEPUphysics has its own scene graph (the Space class).
BEPUphysics does not really have a 'scene graph' as the term is usually used. The API just deals with a flat set of objects. There's a bunch of objects tossed into the space, and the space figures out how to deal with them. No hierarchical representation is exposed, nor is one guaranteed to be used by the internal implementation.
1. It seems to make the actual game world entity more or less irrelevant. Does my game need to have its own scene, or should I let BEPUphysics handle that?
In my own projects, I favor a flat model for game objects over a scene graph. Most spatial relationships can be efficiently dealt with by spatial queries to the physics engine (e.g. Space.RayCast, Space.BroadPhase.QueryAccelerator.GetEntries) rather than by consulting a separate game logic structure. For elements of the game design which require a separate strong structure, it's often better to have a specialized structure than a general scene graph anyway.
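That query-driven style can be sketched like this. This is a rough sketch against the BEPUphysics v1 API from memory, not tested code; the names FindTargetsInBlast and TryShoot are made up for illustration, and exact signatures may vary between versions:

Code: Select all

using System.Collections.Generic;
using BEPUphysics;
using BEPUphysics.BroadPhaseEntries;
using Microsoft.Xna.Framework; // BEPUphysics v1 builds against XNA math types

static class SpatialQueries
{
    // Illustrative explosion query: ask the space for everything whose
    // bounding volume overlaps a sphere, instead of walking a scene graph.
    public static List<BroadPhaseEntry> FindTargetsInBlast(Space space, Vector3 center, float radius)
    {
        var results = new List<BroadPhaseEntry>();
        space.BroadPhase.QueryAccelerator.GetEntries(new BoundingSphere(center, radius), results);
        return results;
    }

    // Illustrative bullet ray cast.
    public static bool TryShoot(Space space, Vector3 origin, Vector3 direction, out RayCastResult hit)
    {
        return space.RayCast(new Ray(origin, direction), 1000, out hit);
    }
}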

A simple pass-through game logic object which does nothing but set its position would indeed be mostly pointless. In this context, game logic objects would be there to do the stuff that isn't mere position/orientation fiddling. For example, the physics engine is utterly ignorant of graphical effects, but maybe you want some effects driven by game logic that follow a physics object around. The game logic object could then be responsible for scooting those effects around appropriately relative to the simulation and other input.
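For example, a game logic object along these lines. Everything here apart from Entity and the math types is made up for illustration; RenderableModel, ParticleEffect, and BurningCrate are stand-ins for whatever your renderer and effects system actually use:

Code: Select all

using BEPUphysics.Entities;
using Microsoft.Xna.Framework; // BEPUphysics v1 builds against XNA math types

// Illustrative stand-ins for renderer- and effect-side objects.
class RenderableModel { public Matrix WorldTransform; }
class ParticleEffect { public Vector3 Position; }

// Hypothetical game-logic object: physics simulates, everything else follows.
class BurningCrate
{
    public Entity PhysicsEntity;
    public RenderableModel Model = new RenderableModel();
    public ParticleEffect Flames = new ParticleEffect();

    public void Update()
    {
        // A position pass-through alone would make this class pointless;
        // the point is the extra logic the physics engine knows nothing about.
        Model.WorldTransform = PhysicsEntity.WorldTransform;
        // Keep the flame effect hovering just above the crate.
        Flames.Position = PhysicsEntity.Position + Vector3.Up * 0.5f;
    }
}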

Does BEPUphysics support game logic?
Sort of. It gives you an interface that you can use in your game logic. For example:

1) Spatial queries. Ray casts for bullets, spherical volume queries for explosions...

2) Collision events.

3) State scanning. For example, instead of using an event, you could scan an entity's collision pairs and contacts to check for a particular collision state. This is sometimes a more natural access pattern than events since you have more explicit control over when and where the data is managed.

4) Attaching tags to entities and collidables. When a spatial query is performed, the result is a Collidable or BroadPhaseEntry as opposed to an entity directly (because not every object is an entity). Getting the Entity associated with a Collidable is doable (casting to an EntityCollidable and checking the Entity property), though it's often more convenient just to store a game logic object in the collidable's Tag property for more direct access to whatever information you want. Note that Entity and BroadPhaseEntry both have their own Tag property. Setting an Entity's Tag property is distinct from setting an Entity.CollisionInformation.Tag property, because the Entity.CollisionInformation property returns the Collidable proxy associated with the entity.
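Items 3 and 4 might look something like this in practice. Again a sketch from memory of the v1 API, not tested; IsGrounded, Crate, and TakeDamage are all made up for illustration:

Code: Select all

using BEPUphysics;
using BEPUphysics.Entities;
using Microsoft.Xna.Framework;

class Crate
{
    public void TakeDamage(int amount) { /* game logic */ }
}

static class GameLogicQueries
{
    // 3) State scanning, illustrative grounded check: inspect the entity's
    // current collision pairs for any pair with live contacts.
    public static bool IsGrounded(Entity entity)
    {
        foreach (var pair in entity.CollisionInformation.Pairs)
            if (pair.Contacts.Count > 0)
                return true;
        return false;
    }

    // 4) Tags: point the collidable's Tag back at the game-logic object so
    // query results resolve directly, without casting through EntityCollidable.
    public static void Shoot(Space space, Vector3 origin, Vector3 direction)
    {
        RayCastResult hit;
        if (space.RayCast(new Ray(origin, direction), 1000, out hit))
        {
            // hit.HitObject is a BroadPhaseEntry; the Tag skips the cast.
            var target = hit.HitObject.Tag as Crate;
            if (target != null)
                target.TakeDamage(10);
        }
    }
}

For the Tag lookup to work, the game object has to be stored in the collidable's Tag when the entity is created, e.g. myEntity.CollisionInformation.Tag = myCrate.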
2. How should I structure the code that renders physics entities?
A renderer primarily cares about the simulated position and orientation of an object, so the interaction between physics and rendering is very restricted. Generally, grabbing each entity's Position/Orientation or WorldTransform is all that's needed. Any approach used to tell the renderer that information is usually sufficient and acceptable.

I usually don't give the renderer a lot of logic to handle. For example, the fact that a renderable is driven by a physics object is unknown to the renderer. It's just another object with a transform. Similar to the physics API, I usually give the renderer a fairly restricted interface and let it do any specialized heavy lifting behind the scenes (culling and so on).
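One way to keep that restriction explicit is a minimal transform-only interface between the two. This is an illustrative design, not BEPUphysics or any particular renderer's API:

Code: Select all

using BEPUphysics.Entities;
using Microsoft.Xna.Framework;

// The renderer only sees transforms; it neither knows nor cares
// that physics produces them.
interface IRenderable
{
    Matrix WorldTransform { get; }
}

// Adapter that feeds a physics entity's transform to the renderer.
class PhysicsDrivenRenderable : IRenderable
{
    readonly Entity entity;
    public PhysicsDrivenRenderable(Entity entity) { this.entity = entity; }

    // Grabbing the entity's WorldTransform each frame is the entire
    // interaction between physics and rendering.
    public Matrix WorldTransform { get { return entity.WorldTransform; } }
}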