VStreamDRLS: Dynamic Graph Representation Learning with Self-Attention for Enterprise Distributed Video Streaming Solutions
Live video streaming has become a standard communication solution for many enterprises worldwide. To efficiently stream high-quality live video content to a large number of offices, companies employ distributed video streaming solutions that rely on prior knowledge of the underlying, evolving enterprise network. However, such networks are highly complex and dynamic. Hence, to optimally coordinate the live video distribution, the available network capacity between viewers has to be accurately predicted. In this paper we propose a graph representation learning technique on weighted and dynamic graphs to predict the network capacity, that is, the weights of the connections/links between viewers/nodes. We propose VStreamDRLS, a graph neural network architecture with a self-attention mechanism that captures the evolution of the graph structure of live video streaming events. VStreamDRLS employs the graph convolutional network (GCN) model over the duration of a live video streaming event and introduces a self-attention mechanism to evolve the GCN parameters. In doing so, our model focuses on the GCN weights that are relevant to the evolution of the graph and generates the node representations accordingly. We evaluate our proposed approach on the link prediction task on two real-world datasets generated by enterprise live video streaming events, each lasting one hour. The experimental results demonstrate the effectiveness of VStreamDRLS when compared with state-of-the-art strategies. Our evaluation datasets and implementation are publicly available at https://github.com/stefanosantaris/vstreamdrls
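To make the core idea concrete, below is a minimal PyTorch sketch of a GCN layer whose weight matrix is evolved across graph snapshots by self-attention over its own history, in the spirit of the abstract's description. The class name, dimensions, and attention layout are illustrative assumptions, not the paper's exact architecture; consult the linked repository for the official implementation.

```python
import torch
import torch.nn as nn

class SelfAttentiveGCN(nn.Module):
    """Illustrative sketch (not the official VStreamDRLS code): a GCN layer
    whose weight matrix at snapshot t is produced by self-attention over
    the weights used at previous snapshots, letting the model focus on the
    parameters most relevant to the graph's evolution."""

    def __init__(self, in_dim, out_dim, num_heads=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)
        # Self-attention applied over the sequence of historical GCN weights.
        self.attn = nn.MultiheadAttention(embed_dim=out_dim, num_heads=num_heads)

    def forward(self, adj, x, weight_history):
        # weight_history: (T, in_dim, out_dim) weights from earlier snapshots.
        seq = torch.cat([weight_history, self.weight.unsqueeze(0)], dim=0)
        # Shape (T+1, in_dim, out_dim) is treated as (seq_len, batch, embed),
        # so each row of the weight matrix attends over its own history.
        evolved, _ = self.attn(seq, seq, seq)
        w_t = evolved[-1]                      # evolved weight for snapshot t
        return torch.relu(adj @ x @ w_t), w_t  # GCN propagation with evolved weight

# Hypothetical usage: N=5 viewers, 8-dim features, 3 earlier weight snapshots.
layer = SelfAttentiveGCN(in_dim=8, out_dim=16)
adj = torch.eye(5)                       # placeholder normalized adjacency
x = torch.randn(5, 8)                    # node features
history = torch.randn(3, 8, 16) * 0.01   # weights from earlier snapshots
emb, w_t = layer(adj, x, history)
```

For the link prediction task, the capacity of a link (u, v) could then be scored from the resulting node embeddings, e.g. as the inner product emb[u] @ emb[v]; the actual scoring function used by the paper may differ.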