First, just wanted to say thanks again for the fantastic library; it's really been invaluable for me and a blast to work with.
Glad it's working for ya
At this point, I certainly wouldn't rule out bugs in my networking or sync code, but any help about the general approach I should be taking would be much appreciated. Am I way off-base with how I'm handling things?
I'm not too familiar with the way Unity's character controller is exposed- I assume it has some sort of per-character update that is explicitly called and which executes outside of the context of the physics engine. So, if I'm understanding the context correctly, the question is how to do something like the per-character simulation in bepuphysics, where characters are physical objects and work within the physics update.
The most direct approach would be to store the full dynamic simulation state plus all input states and then, upon receiving a correction, to rewind and replay the entire simulation (there's a sketch of this after the list below). Potentially resimulating ~10 timesteps every frame is pretty expensive, of course, but on the upside:
-all pending corrections received by the client can be batched together in a single replay,
-for any isolated characters, it should produce equivalent behavior to per-character updates,
-since the replay simultaneously considers all objects, interacting characters produce fairly reasonable results, and
-any other dynamic bodies are handled in a unified way.
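In case it helps make the bookkeeping concrete, here's a minimal sketch of that rewind-replay loop. Everything in it (ISimulation, SimulationState, PlayerInput, the capture/restore/step methods) is a hypothetical placeholder rather than actual BEPUphysics API- it just stands in for 'snapshot the dynamic state' and 'advance one fixed timestep with these inputs':

```csharp
using System.Collections.Generic;

// Hypothetical placeholders; not BEPUphysics types. They represent whatever your
// engine wrapper uses to snapshot/restore dynamic state and advance one timestep.
public interface ISimulation
{
    SimulationState CaptureState();              // snapshot of all body poses/velocities
    void RestoreState(SimulationState state);    // rewind to a previous snapshot
    void Step(PlayerInput[] inputs, float dt);   // advance one fixed timestep
}
public class SimulationState { /* body poses, velocities, etc. */ }
public struct PlayerInput { public int PlayerId; public float MoveX, MoveY; public bool Jump; }

public class RewindReplayClient
{
    readonly ISimulation simulation;
    readonly float timestepDuration;
    // History keyed by tick; in practice a ring buffer sized to the worst expected
    // round trip keeps this bounded.
    readonly Dictionary<int, SimulationState> stateHistory = new Dictionary<int, SimulationState>();
    readonly Dictionary<int, PlayerInput[]> inputHistory = new Dictionary<int, PlayerInput[]>();
    int currentTick;

    public RewindReplayClient(ISimulation simulation, float timestepDuration)
    {
        this.simulation = simulation;
        this.timestepDuration = timestepDuration;
    }

    // Normal frame: record state and inputs, then step forward.
    public void StepWithLocalInputs(PlayerInput[] inputs)
    {
        stateHistory[currentTick] = simulation.CaptureState();
        inputHistory[currentTick] = inputs;
        simulation.Step(inputs, timestepDuration);
        ++currentTick;
    }

    // Server reported authoritative state for a past tick: rewind to it and replay
    // the stored inputs back up to the present. If multiple corrections arrived this
    // frame, restore to the earliest one and fold the later ones in as their ticks
    // are reached, so only one replay is needed.
    public void ApplyCorrection(int correctedTick, SimulationState authoritativeState)
    {
        simulation.RestoreState(authoritativeState);
        for (int tick = correctedTick; tick < currentTick; ++tick)
        {
            stateHistory[tick] = simulation.CaptureState();
            simulation.Step(inputHistory[tick], timestepDuration);
        }
    }
}
```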
To avoid the need to resimulate everything while still using the same networking model, you could create dedicated isolated simulations for each of the characters. That would cut out the cost of simulating other characters or dynamic bodies, but it could end up actually being slower due to the cost of updating the broadphase for all the statics redundantly. To make it worthwhile, you might have to manually manage which static objects exist in the simulation, or make sure all of the statics are in a single static group so that there's essentially no broadphase work. Even if it ended up being faster, there's a pretty big complexity penalty, and you lose the ability to handle physical interactions easily.
In either case, for the tiny simulations which map well to this kind of networking model, you may find that using a single thread will be faster than multithreading due to the overhead of repeated fork/joins.
At a higher level, if your target game doesn't require the properties provided by rewind-replay prediction, there are some other options that open the door to more demanding simulations.
One popular and simple option is clientside character authority. You get a highly responsive result that requires no replaying, at the cost of adding an easy way to cheat. Also, as with any immediate client response approach, interactions between client-authority and server-authority objects can be a bit sloppy. A fast-paced, physicsy co-op game would be a good fit for this, not so much CS:GO or Quake. I used this in a prototype some years ago with decent results.
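Roughly, the server side of clientside character authority amounts to trusting whatever the client reports- the message and handler names below are made up for illustration, not any particular networking library, but they show why it's both responsive and easy to cheat:

```csharp
using System.Collections.Generic;
using System.Numerics;

// Hypothetical message type: the client simulates its own character immediately
// and just reports where it ended up each tick.
public struct CharacterStateMessage
{
    public int PlayerId;
    public int Tick;
    public Vector3 Position;
    public Vector3 LinearVelocity;
}

public class ClientAuthorityServer
{
    // Latest reported pose per player; the server treats these as ground truth.
    readonly Dictionary<int, CharacterStateMessage> characterStates =
        new Dictionary<int, CharacterStateMessage>();

    public void OnCharacterState(CharacterStateMessage message)
    {
        // No validation or rewinding: the client's word is taken as truth for its own
        // character. Any server-authority dynamic bodies it touched still get resolved
        // in the server's own simulation, which is where the sloppiness between the two
        // authority domains comes from.
        characterStates[message.PlayerId] = message;
        BroadcastToOtherClients(message);
    }

    void BroadcastToOtherClients(CharacterStateMessage message) { /* network layer omitted */ }
}
```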
Lately, though, I've come to prefer a much simpler brute force approach whenever possible. Everything is server authority, with no clientside physical prediction based on immediate input at all- the client's physics state only starts to change after the client receives a response from the server. In other words, the client just simulates the same information the server did, delayed by transmission time. This allows extremely stable physical player interaction, since everyone is viewing the same reality.
Obviously, having to wait on roundtrip latency to move would be terrible for a fast-paced shooter, but I've found it to work remarkably well for more deliberate games where characters have 'realistic' acceleration (e.g. merely as fast as Usain Bolt) and weightier animation styles. Nonphysical things like the interface, certain sounds, and camera movement can still be predicted, so it can feel pretty responsive. The extreme simplicity and physical coherence of this approach is really, really nice to work with- if you could do it in single player, there's basically nothing stopping you from doing it in multiplayer with this kind of model.
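For comparison, the client side of that model is about as simple as networked physics gets. Reusing the hypothetical ISimulation and PlayerInput placeholders from the earlier sketch (again, not real BEPUphysics API), the client never steps physics from its own input- it only advances through ticks the server has already confirmed:

```csharp
using System.Collections.Generic;

public class ServerAuthorityClient
{
    readonly ISimulation simulation;
    readonly float timestepDuration;
    // Per-tick input sets confirmed by the server, in the order they arrived.
    readonly Queue<PlayerInput[]> confirmedTicks = new Queue<PlayerInput[]>();

    public ServerAuthorityClient(ISimulation simulation, float timestepDuration)
    {
        this.simulation = simulation;
        this.timestepDuration = timestepDuration;
    }

    // Local input only goes to the server; it never touches the local simulation.
    public void OnLocalInput(PlayerInput input)
    {
        SendToServer(input);
    }

    // Each server message carries the full set of inputs it used for one tick.
    public void OnServerTick(PlayerInput[] inputsForTick)
    {
        confirmedTicks.Enqueue(inputsForTick);
    }

    // Each frame, advance through whatever ticks the server has confirmed so far,
    // so the client replays exactly what the server simulated, delayed by transit.
    public void Update()
    {
        while (confirmedTicks.Count > 0)
        {
            simulation.Step(confirmedTicks.Dequeue(), timestepDuration);
        }
        // Nonphysical feedback (camera, UI, certain sounds) can still react to local
        // input immediately here to keep things feeling responsive.
    }

    void SendToServer(PlayerInput input) { /* network layer omitted */ }
}
```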
I should mention that I'm using v1 currently, but am planning to move to v2 eventually. If things would be different there, that would also be good to know. Thanks!
Only two notable things come to mind:
1) v2 is hugely faster, so replaying is a lot less concerning. Even having hundreds of dynamic bodies would be fine while resimulating 10+ timesteps per frame.
2) v2 does not yet have a character controller, and I'm not sure when I'm going to add it or what it will look like. I might just build a simple one that matches the basic features of v1's HorizontalMotionConstraint as an example and leave out the much more complex stepping logic. Or maybe an example showing a capsule-on-a-stick implementation. (It's not clear that the character controller I end up using for my own projects will be generalizable- it could end up heavily reliant on the details of the animation system.)