Aoik

UniMRCP plugin with WebSocket

In my scenario the UniMRCP plugin works like a pass-through proxy. It maintains a set of WebSocket connections to a backend, and the actual processing is done by the backend.

The WebSocket library I use is libwebsockets.

The key to using WebSocket in a UniMRCP plugin is synchronization between the MRCP connection and the WebSocket connection. When the MRCP connection is finished, the WebSocket connection should be closed. When the WebSocket connection is disconnected abnormally due to a network failure, the MRCP connection should be closed.

The difficult point about synchronization is that three threads are involved: the UniMRCP framework's thread, libwebsockets' thread, and a task thread. Following the pattern suggested by the demo plugins shipped with UniMRCP, some framework events can be responded to asynchronously in the task thread. The three threads communicate via message queues, roughly as sketched below.
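
Here is a minimal sketch of the message types and queues, assuming apr-util's thread-safe apr_queue_t is used as the queue implementation; the message type and helper names are hypothetical:

    #include <apr_pools.h>
    #include <apr_queue.h>

    typedef enum {
        MSG_SESSION_START,  /* MRCP channel opened: soft-open the WebSocket side */
        MSG_SESSION_END,    /* MRCP channel closing: soft-close the WebSocket side */
        MSG_SESSION_ABORT   /* WebSocket dropped: fail the MRCP side */
    } msg_type_e;

    typedef struct {
        msg_type_e type;
        void      *channel; /* the MRCP engine channel the message refers to */
    } thread_msg_t;

    /* One queue per consuming thread. */
    static apr_queue_t *task_queue; /* consumed by the task thread */
    static apr_queue_t *lws_queue;  /* consumed by the libwebsockets thread */

    static apr_status_t post_to_task_thread(thread_msg_t *msg)
    {
        /* apr_queue_push is thread-safe, so any of the three threads may call it. */
        return apr_queue_push(task_queue, msg);
    }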

To simplify the synchronization, my first attempt was to open a new WebSocket connection for each MRCP connection. This proved unwise because it caused a port exhaustion problem under stress testing.

So I instead turned to maintaining a pool of WebSocket connections that are not disconnected unless forced by network failures. In case of a network failure, automatic reconnection is carried out.
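
Here is a rough sketch of the pool bookkeeping; the names, the pool size, and the slot-picking policy are all hypothetical. Because the connections are long-lived, ephemeral ports are no longer churned per MRCP connection, which avoids the exhaustion problem above:

    #include <libwebsockets.h>

    #define WS_POOL_SIZE 8  /* assumed pool size; tune for your load */

    typedef struct {
        struct lws *wsi;          /* NULL while (re)connecting */
        int         in_use;       /* currently bound to an MRCP channel */
        int         retry_count;  /* backoff state for auto reconnection */
    } ws_slot_t;

    static ws_slot_t ws_pool[WS_POOL_SIZE];

    /* Pick a connected, idle slot; callers fall back to queuing if none is free. */
    static ws_slot_t *ws_pool_acquire(void)
    {
        int i;
        for (i = 0; i < WS_POOL_SIZE; i++) {
            if (ws_pool[i].wsi && !ws_pool[i].in_use) {
                ws_pool[i].in_use = 1;
                return &ws_pool[i];
            }
        }
        return NULL;
    }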

In a normal situation, after an MRCP request's response has been sent to the client, the client notifies the server to close the channel. The UniMRCP framework's channel close event fires and our callback registered with the framework is called. As mentioned above, the event can be responded to asynchronously. We do not respond immediately, because otherwise the UniMRCP framework would close the MRCP connection, leaving the WebSocket connection unsynchronized. Instead, the UniMRCP plugin sends a SESSION_END message to the backend, and the backend responds with a SESSION_END message of its own. This is a "WebSocket soft close". (Symmetrically, on the UniMRCP framework's channel open event, the plugin sends a SESSION_START message to the backend; this is a "WebSocket soft open".) On receiving the SESSION_END message from the backend, the plugin and the backend are synchronized, and we can finally respond to the framework's channel close event by calling mrcp_engine_channel_close_respond, as sketched below.
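
Here is a sketch of the deferred close. mrcp_engine_channel_close_respond is the real UniMRCP API; ws_send_text is a hypothetical helper that writes a text frame on the channel's WebSocket connection:

    #include "mrcp_engine_plugin.h"

    /* Hypothetical helper: writes a text frame via the libwebsockets thread. */
    extern void ws_send_text(mrcp_engine_channel_t *channel, const char *text);

    static apt_bool_t my_channel_close(mrcp_engine_channel_t *channel)
    {
        /* Do NOT respond yet: first soft-close the WebSocket side. */
        ws_send_text(channel, "SESSION_END");
        return TRUE; /* the respond happens asynchronously, later */
    }

    /* Called on the task thread when the backend echoes SESSION_END back. */
    static void on_backend_session_end(mrcp_engine_channel_t *channel)
    {
        /* Both sides are now synchronized; let the framework finish the close. */
        mrcp_engine_channel_close_respond(channel);
    }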

In an abnormal situation, the WebSocket connection is disconnected due to a network failure. When libwebsockets' LWS_CALLBACK_CLIENT_CLOSED event fires, the handling code sends a SESSION_ABORT message to the task thread, and the task thread responds to the MRCP request with a failure status to close the MRCP connection. There is no "WebSocket soft close" because the WebSocket connection has already been disconnected. When libwebsockets' LWS_CALLBACK_WSI_DESTROY event fires, the handling code schedules a reconnection.
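
Here is a sketch of this abnormal path inside the libwebsockets protocol callback. The lws callback reasons and lws_sul_schedule (libwebsockets v3.2+) are the real API; thread_msg_t and post_to_task_thread come from the queue sketch above, and make_abort_msg is another hypothetical helper:

    #include <apr_errno.h>
    #include <libwebsockets.h>

    struct thread_msg_t; /* from the queue sketch above */
    extern apr_status_t post_to_task_thread(struct thread_msg_t *msg);
    extern struct thread_msg_t *make_abort_msg(void *per_conn_user);

    static lws_sorted_usec_list_t reconnect_sul;

    static void reconnect_cb(lws_sorted_usec_list_t *sul)
    {
        /* Re-run lws_client_connect_via_info() here to refill the pool slot. */
    }

    static int ws_callback(struct lws *wsi, enum lws_callback_reasons reason,
                           void *user, void *in, size_t len)
    {
        switch (reason) {
        case LWS_CALLBACK_CLIENT_CLOSED:
            /* Network failure: tell the task thread to fail the MRCP request.
               No soft close: the WebSocket connection is already gone. */
            post_to_task_thread(make_abort_msg(user));
            break;
        case LWS_CALLBACK_WSI_DESTROY:
            /* Schedule a reconnection attempt one second from now. */
            lws_sul_schedule(lws_get_context(wsi), 0, &reconnect_sul,
                             reconnect_cb, 1 * LWS_US_PER_SEC);
            break;
        default:
            break;
        }
        return 0;
    }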

Gotcha 1:
As its name suggests, mrcp_engine_channel_close_respond responds to a channel close event. It cannot be used to force a channel to close. Closing a channel should be done by sending a response to the client's request.
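
For example, a failure response to the pending request can be built and sent like this; mrcp_response_create and mrcp_engine_channel_message_send are real UniMRCP APIs, while the surrounding wiring is assumed:

    #include "mrcp_engine_plugin.h"

    /* Force-close by failing the active request; the framework then tears the
       channel down through the normal response path. */
    static void fail_active_request(mrcp_engine_channel_t *channel,
                                    mrcp_message_t *request)
    {
        mrcp_message_t *response = mrcp_response_create(request, request->pool);
        response->start_line.status_code = MRCP_STATUS_CODE_METHOD_FAILED;
        response->start_line.request_state = MRCP_REQUEST_STATE_COMPLETE;
        mrcp_engine_channel_message_send(channel, response);
    }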

Gotcha 2:
An apr_pool_t pool cannot be accessed from multiple threads. The correct way is to use the pool that comes with the UniMRCP framework objects in framework callbacks, and to create a separate pool for libwebsockets' thread and the task thread, respectively.
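
A minimal sketch of that per-thread pool ownership, with hypothetical names:

    #include <apr_pools.h>

    static apr_pool_t *lws_thread_pool;  /* touched only by the lws thread */
    static apr_pool_t *task_thread_pool; /* touched only by the task thread */

    static void create_thread_pools(void)
    {
        /* Each pool is created and used by exactly one thread; framework
           callbacks keep using the pool attached to the framework object. */
        apr_pool_create(&lws_thread_pool, NULL);
        apr_pool_create(&task_thread_pool, NULL);
    }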

Gotcha 3:
Because the three threads communicate via message queues, after the UniMRCP framework's channel destroy event fires and the MRCP connection is closed, there may still be messages pending in the queues that hold dangling pointers. The solution is to add a flag indicating that the MRCP connection has been closed, and to discard such flagged messages after they are dequeued. Access to the flag should be in a critical section protected by a lock.
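
Here is a sketch of the flag, assuming an apr_thread_mutex_t protects it; the session struct and field names are hypothetical:

    #include <apr_thread_mutex.h>

    typedef struct {
        apr_thread_mutex_t *lock;
        int                 closed; /* set when the MRCP connection is closed */
    } session_t;

    static void session_mark_closed(session_t *s)
    {
        apr_thread_mutex_lock(s->lock);
        s->closed = 1;
        apr_thread_mutex_unlock(s->lock);
    }

    /* Called by a consuming thread after dequeuing a message for this session. */
    static int session_is_closed(session_t *s)
    {
        int closed;
        apr_thread_mutex_lock(s->lock);
        closed = s->closed;
        apr_thread_mutex_unlock(s->lock);
        return closed; /* if set, drop the message instead of touching pointers */
    }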
