You may wonder “why not a standalone server?”. That's a fair question.
Being a Node.js module, mediasoup is easy to integrate into larger Node.js applications. Note that mediasoup handles just the media plane (audio/video streams), so the application still needs some kind of signaling mechanism. Having the signaling and the media handlers work together within the same application makes the architecture simpler.
Anyone using other languages/platforms for the signaling plane would need to develop their own communication channel with a standalone Node.js server running mediasoup (or wait for somebody else to do it).
Not exactly. Native addons are Node.js extensions written in C/C++ that can be loaded with require() as if they were ordinary Node.js modules.
Instead, mediasoup launches a set of C++ child processes (media workers) and communicates with them by means of inter-process communication. This approach leads to a media worker design that is not tied to the internals of Node.js or V8 (which change with every new release).
That's the wrong question. mediasoup does not provide any network signaling protocol to communicate with endpoints/browsers. It handles just the media layer, and defines a set of messages (the mediasoup protocol) generated by both mediasoup-client (client side) and mediasoup (server side) that must be exchanged between the two.
It's up to the application developer to build their preferred signaling protocol to carry those messages.
Yes, check the examples.
No. All the peers in a room should support a common subset of audio and video codecs. That said, WebRTC defines a list of MTI (“mandatory to implement”) audio/video codecs, so in a world of happy unicorns this should not be a problem.