These are my learning notes and reference for WebRTC, compiled by working through Avaya blogs and articles, mainly by @ajprokop. They helped me understand what lies behind WebRTC and prepared me to implement it in the Avaya world.
I stepped through Writing Your First WebRTC Application: Part 1 and learned that a WebRTC solution consists of two parts: a web-based application and a signaling server.
Next, continuing the adventure, I read AN INTRODUCTION TO WEBRTC SIGNALING and learned that the WebRTC specification does not define a specific signaling protocol. That means it is up to developers to use any kind of signaling they like: some prefer SIP, some prefer something custom. WebRTC does, however, require both ends to understand SDP (Session Description Protocol), which SIP also speaks. The most common transport between a web browser and the signaling server is the HTML5 WebSocket, but other methods work too; the WebRTC client does not care how the SDP travels, as long as it gets from one client to the other, and it does not care how you encode the surrounding signaling messages, so you can use your own protocol and your own message format with no need to follow SIP. Sticking with WebSocket, we need a signaling server that supports it: WebSocket gives a full-duplex channel between browser and server, which lets the two clients exchange session information. To illustrate how WebRTC makes two browsers exchange SDP through a signaling server, I excerpt a sequence from the article:
- Andrew creates an offer that contains his local SDP.
- Andrew attaches that offer to something known as an RTCPeerConnection object.
- Andrew sends his offer to the signaling server using WebSocket. WebSocket is a protocol that provides a full-duplex communications channel over a single network connection, and it is the most common way to move information between a web browser and the signaling server and vice versa.
- Linda receives Andrew’s offer using WebSocket.
- Linda creates an answer containing her local SDP.
- Linda attaches her answer, along with Andrew’s offer, to her own RTCPeerConnection object.
- Linda returns her answer to the signaling server using WebSocket.
- Andrew receives Linda’s answer using WebSocket.
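The offer/answer steps above can be sketched in browser JavaScript. This is a minimal sketch of Andrew's (the caller's) side, not working code from the article: the WebSocket URL and the JSON envelope (`type`/`sdp` fields) are my own assumptions, since WebRTC leaves the signaling format entirely to the developer.

```javascript
// Hypothetical JSON envelope for the signaling messages; WebRTC does
// not mandate any format, so this shape is purely illustrative.
function makeSignal(type, sdp) {
  return JSON.stringify({ type: type, sdp: sdp });
}

function parseSignal(text) {
  return JSON.parse(text);
}

// Browser-only portion, guarded so the file can also load outside a browser.
if (typeof RTCPeerConnection !== 'undefined') {
  var ws = new WebSocket('wss://signaling.example.com'); // assumed server URL
  var pc = new RTCPeerConnection();

  ws.onopen = function () {
    // Create the offer, attach it to the RTCPeerConnection as the
    // local description, then send it to the signaling server.
    pc.createOffer()
      .then(function (offer) { return pc.setLocalDescription(offer); })
      .then(function () {
        ws.send(makeSignal('offer', pc.localDescription.sdp));
      });
  };

  // When Linda's answer comes back, attach it as the remote description.
  ws.onmessage = function (event) {
    var msg = parseSignal(event.data);
    if (msg.type === 'answer') {
      pc.setRemoteDescription({ type: 'answer', sdp: msg.sdp });
    }
  };
}
```

Linda's side is the mirror image: she sets Andrew's offer as her remote description, calls `createAnswer()`, and sends the result back through the same WebSocket.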
I read Understanding WebRTC Media Connections: ICE, STUN and TURN to learn how the technology makes two devices talk to each other when both sit behind firewalls. Devices on private networks reach the Internet through Network Address Translation (NAT): a NAT device maps the device’s private IP address to a public IP address, acting like an agent between the Internet and the private network. The keywords here are: Interactive Connectivity Establishment (ICE), Session Traversal Utilities for NAT (STUN), and Traversal Using Relays around NAT (TURN). Two pictures summarize the structure:
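In practice, the browser learns about STUN and TURN servers through the configuration object passed to RTCPeerConnection. A small sketch: the STUN entry below is Google’s well-known public server, while the TURN entry is a placeholder with made-up hostname and credentials; a real deployment would point at its own relay.

```javascript
// ICE server configuration handed to RTCPeerConnection. ICE gathers
// candidates via STUN (to discover the public address behind NAT) and
// falls back to TURN (a media relay) when a direct path is impossible.
var iceConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },       // public STUN server
    {
      urls: 'turn:turn.example.com:3478',            // hypothetical relay
      username: 'webrtc-user',                       // hypothetical credentials
      credential: 'secret'
    }
  ]
};

// In a browser: var pc = new RTCPeerConnection(iceConfig);
```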
I briefly went through Writing Your First WebRTC Application: Part 2 and learned that we need to ensure WebRTC is enabled in the browser. At the time of writing, only Chrome and Firefox had implemented WebRTC. One note: WebRTC had to be explicitly enabled in Chrome’s flags, while Firefox enabled it by default. Another note: Chrome and Firefox used different names for the WebRTC functions, so it is recommended to write a wrapper that smooths over the API differences.
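A wrapper of the kind recommended here can be very small. At the time, Chrome exposed `webkitRTCPeerConnection` and Firefox `mozRTCPeerConnection`; modern browsers expose the unprefixed name, so treat this as a historical sketch. I wrote it as a function over a scope object (my own choice, so it can be exercised outside a browser too).

```javascript
// Pick whichever RTCPeerConnection constructor the given global scope
// provides: the standard name first, then the old vendor-prefixed ones.
function pickPeerConnection(scope) {
  return scope.RTCPeerConnection ||
         scope.webkitRTCPeerConnection ||
         scope.mozRTCPeerConnection ||
         null;
}

// In a browser: var PeerConnection = pickPeerConnection(window);
```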
Next, I read through WRITING YOUR FIRST WEBRTC APPLICATION: PART THREE, where I learned how a WebRTC call flows between the calling and called parties.
1) Connect users by way of a signaling server (most commonly WebSocket is used to exchange SDP).
2) Start the signaling between the two sides.
3) Each side exchanges information about its network and how it can be contacted. (This uses the ICE framework, which employs STUN/TURN to traverse NAT so that two devices on different private networks can talk to each other.)
4) Start RTCPeerConnection streams. (Begin transmitting media.)
5) The RTCPeerConnection API. (This is where the real work of establishing a peer-to-peer connection between the two web browsers occurs.) Later, the article shows how the calling and called parties use the API and register handlers to communicate with each other.
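The handler registration mentioned in step 5 can be sketched as below. The helper names `sendToPeer` and `attachRemoteStream` are my own assumptions (stand-ins for the application's signaling and UI code), and `onaddstream` is the era-appropriate event name; modern browsers use `ontrack` instead.

```javascript
// Wire up the two handlers each side registers on its RTCPeerConnection:
// one to relay ICE candidates to the far end over the signaling channel,
// one to surface the remote media stream once it arrives.
function wireHandlers(pc, sendToPeer, attachRemoteStream) {
  pc.onicecandidate = function (event) {
    // The browser fires this for each candidate it gathers; a null
    // candidate signals that gathering is complete.
    if (event.candidate) {
      sendToPeer({ candidate: event.candidate });
    }
  };
  pc.onaddstream = function (event) {
    // Hand the remote stream to the UI, e.g. a <video> element.
    attachRemoteStream(event.stream);
  };
  return pc;
}
```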
Last, I read through WRITING YOUR FIRST WEBRTC APPLICATION: PART FOUR, which offers more hands-on coding for establishing WebRTC in our own applications.
Avaya was driving WebRTC, the customer service of the future, and bringing it into the present. WebRTC enables browser-to-browser voice and video communication, but some enterprises want more control over the media, for example for call recording, and this is where Avaya comes in: integrating WebRTC into the enterprise. In parts of the world with high-speed Internet, such as the US and Singapore, consumers can use WebRTC to reach contact centres from a web browser (desktop or mobile phone), while consumers on slower connections can still use the traditional way: dial a toll-free number, navigate the IVR menu, and reach an agent. Visual IVR has also emerged as another option (not a replacement) alongside traditional voice IVR: the user is presented with questions and answers on a smartphone or the web (reducing the use of voice ports), with the ability to reach an agent at the end of the session. In the call centre, agents may also take calls in a WebRTC-enabled browser. WebRTC may become an alternative to the traditional Avaya desk phone, which could reduce some investment for the enterprise.
WebRTC via SIP is possible. Last year I experimented with another open-source PBX server, FreeSWITCH, and successfully made voice conference calls between my browser (via WebRTC) and my iPhone / iPad / Android Samsung S4 (via SIP phone apps). So the back end could plausibly be linked to an Avaya SBC with SIP trunk support.
Some more links to dig deeper:
Avaya SBC supports ICE, STUN, and TURN.
Avaya WebRTC Snap-In enables users inside or outside the Enterprise to make a secure call from their web browser to any endpoint to which Avaya Aura can deliver calls. For example, customers can call from a web browser directly into a Contact Center.