In this article, I will introduce you to a system that lets peers communicate with each other using the WebRTC technology. This technology provides web browsers and mobile applications with real-time communication (RTC) via simple application programming interfaces (APIs). It lets audio and video communication work inside web pages through direct peer-to-peer connections, eliminating the need to install plugins or download native apps.
For this article, external libraries and plugins are avoided, both so we don't have to worry about whether a package is still maintained and because we want to get our hands directly on the WebRTC API. The project uses the Chromium API (not the Chrome API; Chrome is a browser built on Chromium) to call the underlying logic and implementation of the WebRTC technology. I'm not a fan of Chrome, so I used the Brave browser, which is built on Chromium as well.
Unfortunately, even with the improvements made in network speed, sending a raw stream of audio and video data over the Internet is too much to handle. This is where encoding and decoding come in: the process of breaking video frames or audio waves into smaller chunks and compressing them into a smaller size. The smaller size makes them faster to send across a network and decompress on the other side. The algorithm behind this technique is typically called a codec. There are many codecs in use inside WebRTC, including H.264, Opus, iSAC, and VP8. When two browsers speak to each other, they pick the most optimal codec supported by both users. Browser vendors also meet regularly to decide which codecs should be supported so the technology keeps working across implementations.
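You can actually see which codecs your browser offers by querying the WebRTC API directly. Below is a minimal sketch, assuming a modern Chromium-based browser that implements setCodecPreferences; it lists the available video codecs and reorders them so VP8 is preferred during negotiation.

```ts
// Sketch: inspect supported video codecs and nudge negotiation toward VP8.
// Assumes a Chromium-based browser that implements setCodecPreferences.
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver("video");

const codecs = RTCRtpSender.getCapabilities("video")?.codecs ?? [];
console.log(codecs.map((c) => c.mimeType)); // e.g. ["video/VP8", "video/H264", ...]

// Reorder so VP8 comes first; the offer/answer exchange will then favor it.
const vp8 = codecs.filter((c) => c.mimeType === "video/VP8");
const rest = codecs.filter((c) => c.mimeType !== "video/VP8");
transceiver.setCodecPreferences([...vp8, ...rest]);
```

The reordered list must be a subset of what getCapabilities reports, which is why the sketch filters the browser's own list instead of constructing codec entries by hand.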
The code can be found here. Once you have downloaded the files, you can run the project using the Parcel bundler. After running parcel index.html, navigate to http://localhost:1234 and you will be prompted to share your screen.
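That share prompt comes from the getDisplayMedia API. A minimal sketch of this step might look like the following; the localVideo element id is an assumption for illustration, not necessarily what the linked project uses.

```ts
// Sketch: trigger the browser's "choose what to share" dialog and preview
// the capture locally. Assumes the <video> element has the autoplay attribute.
async function startCapture(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: false,
  });
  const localVideo = document.getElementById("localVideo") as HTMLVideoElement;
  localVideo.srcObject = stream; // show the local capture immediately
  return stream;
}
```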
Once you choose what to share, a local and a remote section will show up. Both sections contain a video element used to display received media. The first video element gets its data straight from the local recorder and shows it on screen, while the second does a totally different job, even though it might look like the same video. The second element is the remote video, implemented locally for educational purposes: behind the scenes, a connection is made between peer A and peer B to deliver it. That's why the second video has a small latency; its data actually travels through the local network rather than being rendered directly.
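Because both peers live in the same page, signaling collapses into plain function calls instead of a server round trip. A minimal loopback sketch, with the remoteVideo id assumed for illustration, might look like this:

```ts
// Sketch: a loopback connection where peer A and peer B run in the same page,
// so offer/answer and ICE candidates are exchanged with direct calls.
async function loopback(stream: MediaStream): Promise<void> {
  const peerA = new RTCPeerConnection();
  const peerB = new RTCPeerConnection();

  // Trickle ICE: hand each peer's candidates straight to the other.
  peerA.onicecandidate = (e) => e.candidate && peerB.addIceCandidate(e.candidate);
  peerB.onicecandidate = (e) => e.candidate && peerA.addIceCandidate(e.candidate);

  // When peer B receives the track, show it in the "remote" video element.
  peerB.ontrack = (e) => {
    const remoteVideo = document.getElementById("remoteVideo") as HTMLVideoElement;
    remoteVideo.srcObject = e.streams[0];
  };

  // Peer A sends the captured screen.
  stream.getTracks().forEach((track) => peerA.addTrack(track, stream));

  // The classic offer/answer dance, done in-process.
  const offer = await peerA.createOffer();
  await peerA.setLocalDescription(offer);
  await peerB.setRemoteDescription(offer);
  const answer = await peerB.createAnswer();
  await peerB.setLocalDescription(answer);
  await peerA.setRemoteDescription(answer);
}
```

In a real deployment, the two direct addIceCandidate and setRemoteDescription calls would be replaced by messages over a signaling channel such as a WebSocket server.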
Checking whether any WebRTC connection is active is very simple. Navigate to chrome://webrtc-internals (or brave://webrtc-internals if you are using the Brave browser) and it lists every connection alive at that exact moment. This way you can see which connections are online, which is pretty useful for debugging your connections and ensuring your software is always alive and kicking.
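If you want the same liveness check programmatically rather than through the webrtc-internals page, getStats() exposes similar data. A minimal sketch, assuming you hold a reference to your RTCPeerConnection:

```ts
// Sketch: check whether a peer connection is alive, mirroring what
// webrtc-internals shows in the browser UI.
async function isConnectionAlive(pc: RTCPeerConnection): Promise<boolean> {
  // Quick check first: "connected" means ICE and DTLS are up.
  if (pc.connectionState === "connected") return true;

  // Fall back to stats: a succeeded candidate pair means media can flow.
  const stats = await pc.getStats();
  let alive = false;
  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.state === "succeeded") {
      alive = true;
    }
  });
  return alive;
}
```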