|
* **Total**: 4 hours per month, doubled to 8 hours for safety, +€54
  per month
|
## Why and what is an SFU

Note that, below, "SFU" means "Selective Forwarding Unit", a way to
scale out WebRTC deployments. To quote [this introduction](https://trueconf.com/blog/wiki/sfu):
|
> SFU architecture advantages
>
> - Since there is only one outgoing stream, the client does not need a wide outgoing channel.
> - The incoming connection is not established directly to each participant, but to the media server.
> - SFU architecture is less demanding to the server resources as compared to other video conferencing architectures.
|
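To make the first advantage concrete, here is a back-of-the-envelope sketch (plain Python; the 2.5 Mbps per-stream figure and the function names are made up for illustration, not taken from any particular deployment) of the upload bandwidth one client needs in a full-mesh call versus an SFU call:

```python
# Rough per-client upload bandwidth: full mesh vs. SFU.
# The 2.5 Mbps per-stream bitrate is an arbitrary example figure.
STREAM_MBPS = 2.5

def mesh_upload(participants: int) -> float:
    """Full mesh: the client sends a copy of its stream to every peer."""
    return (participants - 1) * STREAM_MBPS

def sfu_upload(participants: int) -> float:
    """SFU: the client sends a single stream to the media server."""
    return STREAM_MBPS

for n in (2, 5, 10):
    print(f"{n:2d} participants: mesh {mesh_upload(n):5.1f} Mbps, "
          f"SFU {sfu_upload(n):.1f} Mbps upload")
```

At ten participants that is 22.5 Mbps of upstream for the mesh versus a constant 2.5 Mbps through the SFU, which is why the client "does not need a wide outgoing channel".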
|
|
|
|
|
|
|
And [this comment I made](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41059#note_3124947):
|
|
|
|
|
|
|
|
> I think SFUs are particularly important for us because of our
> distributed nature...
>
> In a single server architecture, everyone connects to the same
> server. So if that server is in, say, Europe, things are fine if
> everyone on the call is in Europe, but once one person joins from the US
> or South America, *they* have a huge latency cost involved with that
> connection. And that scales badly: every additional user far away is
> going to add latency to the call. This can be particularly acute if
> *everyone* is on the wrong continent in the call, naturally.
>
> In an SFU architecture, instead of everyone connecting to the same
> central host, you connect to the host nearest you, and so does everyone
> else near you. This makes it so people close to you have much lower
> latency. People farther away have higher latency, but that's something
> we can't work around without fixing the laws of physics anyways.
>
> But it also improves latency even for those farther-away users because
> instead of N streams traveling across the Atlantic, you multiplex
> those streams into a single one that travels between the two SFU servers.
> That reduces latency and improves performance as well.
>
> Obviously, this scales better as you add more local instances,
> distributed to wherever people are.
|
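That stream-counting claim is easy to sanity-check. Below is a toy calculation (plain Python; the function names, user counts, and the one-stream-per-participant assumption are all illustrative, not anything Jitsi-specific) comparing how many stream copies must cross the transatlantic link in the two topologies:

```python
# Toy model: count media-stream copies crossing a transatlantic link.
# Assumes one stream per participant, no simulcast, and that cascaded
# SFUs forward each remote stream over the link exactly once.

def single_server_crossings(eu_users: int, us_users: int) -> int:
    """Single server in Europe: every US participant pushes their own
    stream across the link and pulls every other stream back across."""
    total = eu_users + us_users
    return us_users + us_users * (total - 1)

def cascaded_sfu_crossings(eu_users: int, us_users: int) -> int:
    """One SFU per region, peered: each source stream crosses the link
    once, and the remote SFU fans it out locally."""
    return eu_users + us_users

print(single_server_crossings(8, 4))  # 48 stream copies on the link
print(cascaded_sfu_crossings(8, 4))   # 12 stream copies on the link
```

The exact numbers depend on simulcast and on who actually subscribes to whom, but the shape of the win is the same.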
|
|
|
|
|
|
|
Note that determining if a (say, Jitsi) instance supports SFU is not
trivial. The *frontend* might be a single machine, but it's the
[videobridge](https://github.com/jitsi/jitsi-videobridge) backend that is distributed; see the [architecture
docs](https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-scalable/#architecture-single-jitsi-meet-multiple-videobridges) for more information.
|
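As an illustration of that frontend/backend split: in Jitsi's scalable setup, each extra videobridge announces itself to the frontend shard over XMPP. A rough sketch of the relevant `sip-communicator.properties` on one bridge, with placeholder hostnames and secrets, loosely based on the scalable-setup guide linked above (double-check the current docs, as newer videobridge releases moved this configuration to `jvb.conf`):

```
# /etc/jitsi/videobridge/sip-communicator.properties, one per bridge
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=meet.example.com
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.meet.example.com
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=CHANGEME
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.meet.example.com
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=jvb-1
```

Each bridge joins the same "brewery" MUC under a unique nickname, which is how the frontend discovers how many videobridges it can spread conferences across.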
|
|
|
|
|
## Alternatives considered

### mumble