BEPUphysics Integration

I guess this is less a specific question about BEPUphysics and more a general question about integrating two independent systems.
Right now I have a renderer that I'm pretty happy with. Like BEPUphysics, it has its own Space data structure, which contains RenderableEntities that can be drawn. My question is this: what is the best way to integrate the BEPUphysics Space with my renderer's Space? It seems like it could get messy quite quickly having two different types of entity living in two different spaces that are only loosely connected, but I don't see how to relate the two spaces more rigidly without modifying the source code of either BEPUphysics or my renderer.
How should this be handled?
Re: BEPUphysics Integration
One common approach is to keep the rendering and physics components as fully separate modules, and then have some other object that actually uses those modules. In other words, the isolated physics, rendering, and whatever other systems you need are composed to form a full game entity.
For example, a game character would hold a reference to its rendered object, a reference to its physics object (probably a CharacterController), and maybe a link to an AI system if it's an NPC. The game character acts as a sort of glue between the systems. It can take the results of the physics simulation to update the state of the renderer, request information from the AI subsystem to decide what to tell the physical CharacterController to do next, and so on.
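A minimal sketch of that glue object might look something like this. RenderableEntity here is just a stand-in for whatever your renderer's drawable type is, and the math types (Vector3/Matrix) come from BEPUutilities or XNA depending on which BEPUphysics version you're on, so treat those details as assumptions rather than the one true way to do it:

    using BEPUphysics.Entities;
    using BEPUutilities; // or Microsoft.Xna.Framework on older XNA-based versions

    // Stand-in for your renderer's drawable type; assumed to expose a world transform.
    public class RenderableEntity
    {
        public Matrix WorldTransform { get; set; }
    }

    // The "glue" object: holds references into both modules, but neither module
    // knows anything about the other.
    public class GameCharacter
    {
        private readonly Entity physicsEntity;        // lives in the BEPUphysics Space
        private readonly RenderableEntity renderable; // lives in your renderer's Space

        public GameCharacter(Entity physicsEntity, RenderableEntity renderable)
        {
            this.physicsEntity = physicsEntity;
            this.renderable = renderable;
        }

        // Game logic (input, AI, ...) pushes decisions into the physics side here.
        public void Move(Vector3 desiredVelocity)
        {
            physicsEntity.LinearVelocity = desiredVelocity;
        }

        // After the physics Space updates, copy the simulated pose to the renderer.
        public void SyncRenderState()
        {
            renderable.WorldTransform = physicsEntity.WorldTransform;
        }
    }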
Keeping each bit of core functionality decoupled like this frees up each system's individual design and simplifies the interface needed to access it. In contrast, if e.g. rendering and physics have to be aware of each other, the communication surface can get very large and unwieldy.
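With that setup, the per-frame flow stays short. Something like the following, where physicsSpace, gameCharacters, and renderSpace are assumed fields of your game class, and renderSpace.Draw() is just a placeholder for however your own renderer kicks off drawing:

    public void Update(float dt)
    {
        // 1. Step the physics simulation in its own BEPUphysics Space.
        physicsSpace.Update(dt);

        // 2. Each glue object copies the results it cares about over to the renderer.
        foreach (var character in gameCharacters)
            character.SyncRenderState();

        // 3. Draw from the renderer's Space, which never touched BEPUphysics directly.
        renderSpace.Draw();
    }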