Live RTSP camera integration in web services for public administrations

Probably the first thing you will consider when integrating IP cameras with WebRTC applications is codec compatibility between endpoints.

That is why the WebRTC specification is very clear: any WebRTC implementation must support both VP8 and H264. This means that visualizing most IP cameras shouldn’t present any problem, as most of them encode video with H264…

However, it is not all as simple as that. Although in theory you can connect an RTSP camera to a browser, there are many things to take into account that make it a bit more complicated. You may find a very good explanation here: https://www.kurento.org/blog/kurento-webrtc-gateway-ip-cameras

In addition to what is discussed in that post, you may also run into other problems, for example related to the actual support of the H264 codec.

Context

Renacen’s project consists of a web platform that provides access to more than a thousand IP security cameras. These cameras are geolocated and linked to a geolocated alarm system. The type, model and properties of these cameras are very diverse, and their network capabilities can also differ greatly. Most of the cameras are property of the final customer, but many others belong to third parties that simply allow Renacen’s customer to access their media flows.

The project uses OpenVidu to deliver the camera flows to the users. Though OpenVidu, Kurento and most browsers offer built-in H264 compatibility, there was a significant number of video flows that couldn’t be played properly. As a result, some cameras didn’t display on the web platform, or displayed with errors, due to subtle -or not so subtle- elements that prevent the flow from passing through Kurento -and other media servers-: for example the media container, the codec, the bandwidth, or even the specific H264 profile.

The project

Given the variety of camera types and codecs described above, OpenVidu PRO is the perfect choice: it is capable of managing the multiplexing and transcoding of the media flows -to a point- and integrates easily into the existing software through its simple but powerful API.

However, the media flows that play with errors or cannot be played at all need something else. This part of the project was divided into 3 steps:

  1. Media flow analysis.
  2. Tailored solution development.
  3. Deployment, integration, testing and customisation of the developed solution.

Step 1.

Objective: obtain all the relevant information from a wide range of video flows, covering all working and non-working cases.

With this intensive analysis of real media flows from the production platform, we detected that most of the errors were produced by codec incompatibilities; more specifically, by particular codec parameters (encapsulation, level, profile, etc.). You can consult technical details here.
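As an illustration of this kind of analysis, the following minimal sketch -in Python, assuming ffprobe from FFmpeg is available and using a hypothetical camera URL- extracts the codec, profile, level and resolution of the first video stream of an RTSP flow, so they can be compared against the known working and non-working cases:

```python
import json
import subprocess

def probe_rtsp_stream(rtsp_url: str) -> dict:
    """Return codec, profile, level and resolution of the first video stream."""
    cmd = [
        "ffprobe", "-v", "error",
        "-rtsp_transport", "tcp",          # force TCP to avoid losses while probing
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,profile,level,width,height,avg_frame_rate",
        "-of", "json",
        rtsp_url,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    streams = json.loads(out.stdout).get("streams", [])
    return streams[0] if streams else {}

# Hypothetical camera URL, for illustration only
print(probe_rtsp_stream("rtsp://camera.example.local/stream1"))
```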

Step 2.

Objectives:

  • Play all the cameras -especially those that couldn’t be played previously-.
  • Improve the video quality of those cameras that were displayed with errors -pixelation, green frames, grey zones-.
  • DO NOT apply changes or transformations to the videos that were already displayed properly -saving as much CPU as possible for those flows-.

In order to reach these objectives, it is necessary to think of 2 main functionalities:

  1. A media flow “discriminator” that identifies the flows that need some transformation.
  2. Applying the specific transformations -transcoding, re-encapsulation, etc.- ONLY to the incorrect flows -those not playing or playing with errors-.

The “media flow discriminator” is in charge of obtaining the actual data from the video flow, comparing them with previous “knowledge” and determining (i) whether any transformation is necessary and (ii) if so, which transformation it should be. Additionally, this component allows the behaviour for a specific media flow to be “hard-coded”, independently of what the “discriminator” logic dictates.
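A minimal sketch of that decision logic could look like the following (Python; the override map, profile list and container value are illustrative assumptions, not the actual rules derived from Step 1):

```python
# Per-camera “hard-coded” behaviour that takes precedence over the generic logic.
# Camera identifiers and actions are illustrative assumptions.
PER_CAMERA_OVERRIDES = {
    "camera-example-A": "transcode",    # always transform this flow
    "camera-example-B": "passthrough",  # never touch this flow
}

def decide_action(camera_id: str, props: dict) -> str:
    """Return 'passthrough', 'remux' or 'transcode' for a probed video flow.

    `props` holds the parameters obtained from the flow
    (codec_name, profile, level, container, ...).
    """
    # 1. Hard-coded behaviour wins over the generic discriminator logic.
    if camera_id in PER_CAMERA_OVERRIDES:
        return PER_CAMERA_OVERRIDES[camera_id]

    # 2. Codecs other than H264 need full transcoding.
    if props.get("codec_name") != "h264":
        return "transcode"

    # 3. H264 flavours known to cause problems are transcoded as well
    #    (the exact profiles come from the Step 1 analysis; these are only examples).
    profile = (props.get("profile") or "").lower()
    if profile not in ("constrained baseline", "baseline", "main"):
        return "transcode"

    # 4. A compatible codec in a problematic container only needs re-encapsulation.
    if props.get("container") == "mpegts":   # illustrative value
        return "remux"

    # 5. Everything else is left untouched to save CPU.
    return "passthrough"
```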

The “transformation” component receives the original flow and the transformation to apply, and its output is a new, already-transformed media flow. It is basically a proxy for the original media flow.
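One possible way to implement such a proxy -a rough sketch only- is to launch FFmpeg to pull the original RTSP flow, transcode it to a browser-friendly H264 stream and republish it through a local RTSP server. The republishing endpoint (e.g. MediaMTX listening on port 8554) and all encoding settings below are assumptions for the example, not the project’s actual configuration:

```python
import subprocess

def start_transform_proxy(src_url: str, out_name: str) -> subprocess.Popen:
    """Pull `src_url`, transcode to constrained-baseline H264 and republish it.

    Assumes FFmpeg is installed and an RTSP server (e.g. MediaMTX) is
    listening on localhost:8554 to accept the republished stream.
    """
    cmd = [
        "ffmpeg", "-hide_banner", "-loglevel", "warning",
        "-rtsp_transport", "tcp", "-i", src_url,     # original camera flow
        "-an",                                       # drop audio (illustrative choice)
        "-c:v", "libx264",                           # re-encode the video...
        "-profile:v", "baseline", "-level", "3.1",   # ...to a browser-friendly H264 flavour
        "-preset", "veryfast", "-tune", "zerolatency",
        "-f", "rtsp", f"rtsp://localhost:8554/{out_name}",
    ]
    return subprocess.Popen(cmd)

# The transformed flow is then consumed by OpenVidu/Kurento instead of the original one.
proc = start_transform_proxy("rtsp://camera.example.local/stream1", "camera1-fixed")
```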

Step 3.

Objectives:

  • Deploy the solution and validate it under the expected load.
  • Validate the integration with Renacen’s platform.
  • Create the behavioural rules for each combination of media flow parameters, so that the “discriminator” can select the most appropriate action -see the sketch below-.
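To give an idea of what those behavioural rules could look like, here is a simplified, declarative sketch (Python; every condition and action below is an illustrative assumption, not one of the project’s real rules):

```python
# Ordered rules: the first condition that matches a flow's parameters decides
# the action the “discriminator” will select. All values are illustrative.
BEHAVIOURAL_RULES = [
    (lambda p: p.get("codec_name") != "h264",                        "transcode"),
    (lambda p: (p.get("profile") or "").lower().startswith("high"),  "transcode"),
    (lambda p: p.get("container") == "mpegts",                       "remux"),
]
DEFAULT_ACTION = "passthrough"

def select_action(props: dict) -> str:
    """Return the action of the first matching rule, or the default one."""
    for condition, action in BEHAVIOURAL_RULES:
        if condition(props):
            return action
    return DEFAULT_ACTION
```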

Our role

Our role in this project has been that of media specialists:

  • Identifying the main characteristics of the videos that didn’t play or played with errors, and finding the most adequate transformation.
  • Studying the available technologies and tools to apply the transformations to the media flows.
  • Developing a scalable system, with minimum CPU impact and minimum integration cost, that automatically identifies whether a flow should be transformed or whether the original flow will play without errors.
  • Putting in place a real-time transformation component ONLY for flows that are predicted not to play or to play with errors.
  • Supporting Renacen in the design of the general architecture and the development of the solution that uses OpenVidu PRO.

Technologies

All media delivery is subject to many parameters and variables -network quality, codecs, containers, video resolution, fps, etc.-. When talking about thousands of cameras, we are talking about at least hundreds of possible combinations. Therefore, being able to automate and predict the behaviour of a specific video by taking these parameters and properties into consideration is of paramount importance in order to manage a full-scale, real-time video-surveillance web platform.

Conclusion and lessons learnt

This project has made us give our best and put to use all our knowledge of media delivery and media transcoding internals. We had to analyse tens of videos to find the most relevant parameters that determine whether a video plays correctly, does not play at all, or plays with errors. Furthermore, we had to apply this knowledge to develop a custom solution able to apply the appropriate transformation to each media flow. In short, this has meant many hours of trial and error and many tests with different parameters to find the best visualisation for the videos.

In a project like this, with such a tight schedule, the key factor has been finding open-source technologies that allowed us to move forward fast while guaranteeing that we could audit the code of those components.

Teaming up with a company as expert in innovative developments as Renacen has been a pleasure: their knowledge of a wide range of technologies and the great architecture of their platform made the integration very easy. Working hand in hand with their development team also made things easy for us.