Towards Robust Reasoning over Knowledge Graphs

10/27/2021
by Zhaohan Xi, et al.

Answering complex logical queries over large-scale knowledge graphs (KGs) is an important artificial intelligence task with a range of applications. Recently, knowledge representation learning (KRL) has emerged as the state-of-the-art approach: KG entities and the query are embedded into a latent space such that the entities that answer the query lie close to the query's embedding. Yet, despite its surging popularity, the potential security risks of KRL remain largely unexplored, which is concerning given the increasing use of such capabilities in security-critical domains (e.g., cyber-security and healthcare). This work represents a solid initial step towards bridging this gap. We systematize the potential security threats to KRL according to the underlying attack vectors (e.g., knowledge poisoning and query perturbation) and the adversary's background knowledge. More importantly, we present ROAR (Reasoning Over Adversarial Representations), a new class of attacks that instantiate a variety of such threats. We demonstrate the practicality of ROAR in two representative use cases (i.e., cyber-threat hunting and drug repurposing). For instance, ROAR attains over 99% attack success rate in misleading the threat intelligence engine into giving pre-defined answers for target queries, yet without any impact on non-target ones. Further, we discuss potential countermeasures against ROAR, including filtering of poisoning facts and robust training with adversarial queries, which lead to several promising research directions.
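To make the KRL setup concrete, the sketch below illustrates the core query-answering idea the abstract describes: entities and queries share one embedding space, and a query is answered by the entities nearest to the query's embedding. This is a minimal illustration with hypothetical names (entity_emb, embed_query, answer_query) and randomly initialized vectors, not the paper's actual model or the ROAR attack.

```python
# Minimal sketch of KRL-style query answering: embed the query,
# then rank entities by distance to the query embedding.
# All names and values here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
num_entities, embed_dim = 1000, 64

# Learned entity embeddings (random stand-ins for a trained model).
entity_emb = rng.normal(size=(num_entities, embed_dim))

def embed_query(anchor_ids: list[int]) -> np.ndarray:
    """Toy query encoder: average the anchor entities' embeddings.
    Real KRL systems instead learn relation/logical operators that
    compose the query structure into a single embedding."""
    return entity_emb[anchor_ids].mean(axis=0)

def answer_query(anchor_ids: list[int], k: int = 5) -> np.ndarray:
    """Answers = the k entities embedded closest to the query."""
    q = embed_query(anchor_ids)
    dists = np.linalg.norm(entity_emb - q, axis=1)
    return np.argsort(dists)[:k]

print(answer_query([3, 17, 42]))
```

Under this view, the attack surface the paper studies is intuitive: knowledge poisoning shifts the learned entity embeddings, while query perturbation shifts the query embedding, either of which can move a pre-defined target entity into the answer set.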
