
Automatic verification system via knowledge graph reasoning and inconsistency identification in online encyclopedias


Disinformation is manufactured with no respect for accuracy, yet it can circulate at unprecedented speed and breadth. There is an urgent need to develop automatic techniques to facilitate verification. In this project, we focus on a principled formalism and develop intelligent systems for verification and web quality evaluation. Conceptually, we propose an alternative approach based on a new knowledge graph (KG) methodology. The main challenge is to automatically construct a scientific KG and then match the structured KG against unstructured claims to determine their consistency. In addition, a potential issue with previous verification methods is the lack of explanations for the final prediction. Our framework will generate important evidential knowledge to support each judgement, which helps human checkers conduct further analysis and enhances AI interpretability.
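The claim-matching idea above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the KG is a hypothetical set of (subject, relation, object) triples, and the matching logic is a simple exact lookup that also returns the evidential triples behind the verdict.

```python
def verify_claim(claim, kg_triples):
    """Check a (subject, relation, object) claim against a set of KG triples.

    Returns a verdict plus the evidential triples that support the judgement,
    so a human checker can inspect why the claim was accepted or rejected.
    """
    subject, relation, _ = claim
    # Evidence: every KG triple about the same subject and relation.
    evidence = [t for t in kg_triples if t[0] == subject and t[1] == relation]
    if not evidence:
        return "not enough info", evidence
    if claim in evidence:
        return "consistent", evidence
    return "inconsistent", evidence


# Toy KG (hypothetical triples for illustration only).
kg = {
    ("Chile", "shares borders with", "Peru"),
    ("Chile", "shares borders with", "Bolivia"),
    ("Chile", "shares borders with", "Argentina"),
}

verdict, evidence = verify_claim(("Chile", "shares borders with", "Peru"), kg)
```

A real system would replace the exact-match lookup with KG reasoning (e.g., embedding-based or multi-hop matching), but the returned evidence serves the same explanatory role described above.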

We will apply our proposed framework on multiple real-world platforms. Take encyclopedias as an example: the quality of encyclopedia articles is of utmost concern to the community and beyond, yet how to measure quality remains an open question. Wikipedia, an online encyclopedia, is widely used by people from various disciplines. While there have been efforts to use Wikidata information to populate Wikipedia pages, there is little research on the consistency between the information already on Wikipedia and the content on Wikidata. For example, the article about Chile in the English Wikipedia says: "... It borders Peru to the north, Bolivia to the northeast, Argentina to the east, and the Drake Passage in the far south". We want to extract such information and compare it with the Wikidata item about Chile under the property "shares borders with" to check whether the two are consistent. Being able to make such comparisons will have important and positive effects on the quality and availability of information in both Wikidata and Wikipedia, helping to detect inconsistencies, identify missing content (in both projects), and share references.
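The Chile comparison above can be sketched as a simple set reconciliation. In this illustration the extraction step is simulated with hardcoded sets; a real system would parse the article text and query the Wikidata item's "shares borders with" property (P47). The specific values are assumptions for the example, and note that the Drake Passage, a body of water, would not be expected among the land-border values.

```python
def compare_borders(text_borders, wikidata_borders):
    """Return the entities missing from each side, to flag inconsistencies
    between a Wikipedia article and the corresponding Wikidata item."""
    return {
        "missing_in_wikidata": sorted(text_borders - wikidata_borders),
        "missing_in_text": sorted(wikidata_borders - text_borders),
    }


# Countries extracted from the English Wikipedia article (simulated).
text_borders = {"Peru", "Bolivia", "Argentina"}
# Hypothetical values of P47 ("shares border with") on the Wikidata item.
wikidata_borders = {"Peru", "Bolivia", "Argentina"}

report = compare_borders(text_borders, wikidata_borders)
# Empty lists on both sides indicate the two sources are consistent;
# a non-empty list pinpoints exactly which entity is missing where.
```

The same set-difference report directly supports the editing workflows mentioned above: entries under `missing_in_wikidata` suggest content to add to Wikidata, while entries under `missing_in_text` flag statements the article may be missing or contradicting.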

This alignment of content will also help patrollers in under-resourced communities to detect suspicious content early (e.g., information introduced into Wikipedia in language X that is inconsistent with the information in Wikidata or other wikis), helping to fight disinformation campaigns.

Required skills

The student is required to have learned machine learning algorithms and to have experience implementing deep learning models.

Principal Investigator


Dr. Jing Ma


Assistant Professor, Department of Computer Science




Dr. Yin Zhang


Assistant Professor, Department of Journalism