How to build a simple WebRTC browser-based video application


Last Updated on May 31, 2021

Most of us are familiar with videoconferencing solutions. But sometimes, particularly for professional use cases, it can be preferable to build your own custom solution. This is interesting if you need an application that meets particular business needs or if you want to integrate video communication into existing workflows. With ApiRTC you can easily build a custom video application and get it running in minutes.


In this quick start article, you’ll learn how to create a simple Web browser-based video application using the ApiRTC Javascript client.

In subsequent posts, we’ll show you how to develop the same application using the Android and iOS SDKs.


This quick start assumes that 2 clients connect to the same ApiRTC conversation. A conversation may be regarded as a room, in which ApiRTC clients share audio-video streams and exchange messages. In this demo, each client will publish its webcam stream and subscribe to the stream of the other participant.


Both local and remote streams will be displayed with a common picture-in-picture layout.



To complete this quick start, you’ll need:


  • an ApiRTC account, to obtain your API key
  • a recent Web browser with WebRTC support

To build this application, we’ll walk you through these different steps:

  • Setting up: get your ApiRTC key and create the project structure
  • Create the Web page
  • Create a User agent
  • Create an ApiRTC conversation and publish your local video
  • Subscribe to the remote stream
  • Customize the CSS
  • Run the application

Completing this tutorial will take you about 15 minutes. You may also browse the full source code here.

Setting up


Get your ApiRTC key


In order to connect to the ApiRTC server, the application requires an API key. To find out your API key, log into your ApiRTC account then go to the API/Credentials section. We’ll use this key later in this application.



Create the project structure


This project is a simple Web Application consisting of an HTML file, a Javascript file and a CSS file.

Before delving into the code, you need to create a project directory (apirtc for instance) for your application. This folder will eventually look as follows:

apirtc/
├── index.html
├── css/
│   └── app.css
└── js/
    └── app.js


Create the Web page


To begin with, add an index.html file in the root directory of your project. We’ll use this file to load the ApiRTC library as well as our JS and CSS files and to create a basic layout for the video call.


Here is the code:


<!DOCTYPE html>
<html>
  <head>
    <title>ApiRTC Quickstart</title>
    <link rel="stylesheet" href="css/app.css"/>
    <script type="text/javascript" src=""></script>
  </head>
  <body>
    <div id="conference">
        <div id="remote"></div>
        <div id="local"></div>
    </div>
    <script type="text/javascript" src="js/app.js"></script>
  </body>
</html>


The code includes two HTML div elements which will hold the local and remote video streams.



Create a User agent


Now add an app.js file in the js folder. This file contains the code for our application.

The first step is to create a User Agent, which is the ApiRTC client that we’ll use to connect to the ApiRTC server and manage media streams.


Start by adding the following code:



var userAgent = new apiRTC.UserAgent({ uri: 'apzkey:YOUR_API_KEY' });

You need to replace YOUR_API_KEY with your ApiRTC key. Refer to the Get your ApiRTC key section if you don’t already have your key available.
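If you prefer to load the key from a configuration value rather than hard-coding it, a small helper can validate it before building the uri string. Note that buildApzkeyUri is a hypothetical name of our own; only the 'apzkey:' prefix comes from ApiRTC:

```javascript
// Hypothetical helper: validates an API key and builds the
// 'apzkey:' uri string expected by apiRTC.UserAgent.
function buildApzkeyUri(apiKey) {
    if (typeof apiKey !== 'string' || apiKey.length === 0) {
        throw new Error('An ApiRTC API key is required');
    }
    return 'apzkey:' + apiKey;
}

// Usage (YOUR_API_KEY still needs to be replaced with your real key):
// var userAgent = new apiRTC.UserAgent({ uri: buildApzkeyUri('YOUR_API_KEY') });
```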


Create an ApiRTC conversation and publish your local video


Paste the following code into app.js:


userAgent.register().then(function(session) {
    var conversation = session.getConversation('quickstart');

    // subscribe to remote stream

    // publish local stream
    var streamOptions = {
        constraints: {
            audio: true,
            video: true
        }
    };

    userAgent.createStream(streamOptions).then(function(stream) {
        stream.addInDiv('local', 'local-media', { width: '100%', height: '100%' }, true);
        conversation.join().then(function() {
            conversation.publish(stream);
        });
    });
});

The application registers your User Agent with the ApiRTC cloud system. Once you’re registered, a session is created, representing the connection of the Javascript client to the ApiRTC platform.


We’re using this session to get a reference to the quickstart conversation. A conversation is where the communication between different user agents takes place by sending and receiving media streams.


Note that our conversation is created automatically if it does not already exist.


We then create a local audio and video stream, join the conversation and finally publish our stream.


Publish means that the stream is sent to the conversation so that other participants in this conversation can watch our video stream.


We publish the stream in the completion handler of the join() function, since this requires prior connection to the conversation.
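Because both join() and publish() return promises, errors from either step can be surfaced in one place. Here is a minimal sketch, assuming the conversation and stream objects created above; joinAndPublish is our own helper name, not an ApiRTC API:

```javascript
// Sketch: chain join() then publish(), with a single error handler.
// Assumes a conversation and stream like the ones created above.
function joinAndPublish(conversation, stream) {
    return conversation.join()
        .then(function() {
            return conversation.publish(stream);
        })
        .catch(function(err) {
            console.error('Could not join or publish:', err);
            throw err; // let callers react too
        });
}
```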


We also display our local stream on the web page by adding a video element to the div with the local id. To achieve this, we use the addInDiv function, which saves us from manipulating the DOM directly.


This function takes 4 parameters:


  • the target DOM element that the video is added to
  • the id of the created video element
  • properties of the video element, used to specify its width and height
  • whether the audio shall be muted or not. For the local video we do not want to hear ourselves so it is set to true (i.e. muted).
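Since addInDiv is called with the same shape of arguments for both the local and remote streams, you could group them in a small helper of your own. displayTarget below is a hypothetical name; only addInDiv itself comes from ApiRTC:

```javascript
// Hypothetical helper returning the four addInDiv arguments for a
// given side of the call; only the local stream is muted.
function displayTarget(isLocal) {
    return {
        divId: isLocal ? 'local' : 'remote',               // target DOM element
        mediaId: isLocal ? 'local-media' : 'remote-media', // id of the created video element
        style: { width: '100%', height: '100%' },          // video element properties
        muted: isLocal // mute our own stream so we don't hear ourselves
    };
}

// Usage:
// var t = displayTarget(true);
// stream.addInDiv(t.divId, t.mediaId, t.style, t.muted);
```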



Subscribe to remote stream


We’ll now display the other participant’s video stream so that everybody can see each other. In the app.js file, add the following code just below the // subscribe to remote stream comment.


conversation.on('streamListChanged', function(streamInfo) {
    if (streamInfo.listEventType === 'added' && streamInfo.isRemote === true) {
        conversation.subscribeToMedia(streamInfo.streamId).then(function (stream) {
            console.log('Successfully subscribed to remote stream: ', stream);
        }).catch(function (err) {
            console.error('Failed to subscribe to remote stream: ', err);
        });
    }
});

conversation.on('streamAdded', function(stream) {
    stream.addInDiv('remote', 'remote-media', { width: '100%', height: '100%' }, false);
});

conversation.on('streamRemoved', function(stream) {
    stream.removeFromDiv('remote', 'remote-media');
});

When a new stream is added to or removed from the conversation, a streamListChanged event is dispatched. If a remote stream is added, which means that another client is publishing to the conversation, we subscribe to this stream.
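The condition in the handler can be isolated as a small pure predicate, which makes it easy to test in isolation. shouldSubscribe is a name of our own; the streamInfo fields are those used in the handler above:

```javascript
// Predicate mirroring the check in the streamListChanged handler:
// subscribe only when a *remote* stream has just been *added*.
function shouldSubscribe(streamInfo) {
    return streamInfo.listEventType === 'added' && streamInfo.isRemote === true;
}
```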


Once the subscription has been performed, a streamAdded event is triggered. We’ve added a handler for this event to update the web application and display the subscribed stream.


We have also added a handler for the streamRemoved event to remove the video when the stream terminates.



Customize the CSS


Finally, we’ll add some CSS to achieve the picture-in-picture display with the remote video filling the whole browser and a small local video embedded within. Edit the app.css file and add the following code:


html {
    height: 100%;
}

body {
    height: 100%;
    margin: 0;
}

#conference {
    position: relative;
    width: 100%;
    height: 100%;
}

#remote {
    position: absolute;
    left: 0;
    top: 0;
    width: 100%;
    height: 100%;
    z-index: 1;
}

#local {
    position: absolute;
    width: 320px;
    height: 240px;
    bottom: 20px;
    right: 20px;
    z-index: 2;
}

Run the application


Open a browser and load the index.html file. Your browser will likely ask you for permission to use the webcam and microphone.


Accept, and you’ll then see your local video in the bottom-right part of the browser. You may want to mute your microphone or set the audio level to 0 to prevent a feedback loop.


Open another browser window and load the same file. You should now see both videos. Since you’re using a single webcam, the local and remote videos are the same. You may now want to run this application on two computers to get two different video streams.


Congratulations, you have now reached the end of the tutorial. In the next installment of the ApiRTC developer blog, we’ll develop the same application using Ionic for iOS and Android.