Visualizing RNN States with Predictive Semantic Encodings

08/01/2019
by Lindsey Sawatzky, et al.

Recurrent Neural Networks are an effective and prevalent tool for modelling sequential data such as natural language text. However, their deep nature and massive number of parameters pose a challenge for those intending to study precisely how they work. We present a visual technique that gives a high-level intuition of the semantics of the hidden states within Recurrent Neural Networks. This semantic encoding allows hidden states to be compared throughout the model, independent of their internal details. The proposed technique is displayed in a proof-of-concept visualization tool, which we demonstrate on the natural language processing task of language modelling.
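The abstract does not spell out how the semantic encoding is computed, but one plausible reading of a "predictive semantic encoding" is to characterize each hidden state by the output distribution it induces, so that states are compared via their predictions rather than their raw coordinates. The sketch below illustrates this idea on a toy RNN language model; all weights and sizes are hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RNN language model over a 5-word vocabulary. The random weights
# are hypothetical placeholders for a trained model's parameters.
V, H = 5, 8                     # vocab size, hidden size
W_xh = rng.normal(size=(V, H))  # input-to-hidden (one-hot row lookup)
W_hh = rng.normal(size=(H, H))  # hidden-to-hidden recurrence
W_hy = rng.normal(size=(H, V))  # hidden-to-output (softmax layer)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def semantic_encoding(h):
    """Encode a hidden state by the next-word distribution it predicts.

    Two hidden states anywhere in the model can then be compared in
    this shared probability space, independent of their internal
    representation (e.g. via a distance between distributions).
    """
    return softmax(h @ W_hy)

# Run the RNN over a token sequence and encode every hidden state.
tokens = [0, 3, 1, 4, 2]
h = np.zeros(H)
encodings = []
for t in tokens:
    h = np.tanh(W_xh[t] + h @ W_hh)
    encodings.append(semantic_encoding(h))

# Each encoding is a probability distribution over the vocabulary,
# which a visualization tool could render (e.g. as a bar per word).
for i, p in enumerate(encodings):
    print(f"step {i}: top word {p.argmax()}, p = {p.max():.2f}")
```

A visualization built on such encodings can place every hidden state, regardless of layer or timestep, into the same interpretable space of predicted words.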
