Machine learning of tongue features and multimodal approaches for health measurements
Background and significance of the project
Tongue diagnosis is an important part of Chinese medicine diagnosis, and tongue image data and their associations with health and disease are an important aspect of Chinese medicine data. The proposed project will use machine learning to extract and quantify tongue features and their associated symptoms/biomarkers, so as to establish the first digital tongue database in Chinese medicine. At the same time, clinical data associated with the tongue images will be analysed by machine learning to create multimodal algorithms for assessing changes in health and disease. We are currently investigating the digital characteristics of tongue features associated with diabetes and with acute respiratory inflammation (both COVID-19 and non-COVID-19). The results of these studies will help to produce multimodal algorithms for non-invasive diabetes detection and for prediction of suboptimal health changes, which can be used in tele-healthcare apps such as i-heals.com, a web app prototype created by the research team with funding from the Cyberport Creative Micro Fund.
Research plan and methodology
This project draws on the expertise of a multidisciplinary team of scholars from Chinese medicine (Dr. Zhang Shi Ping), computer science (Dr. Lan Liang) and mathematics (Prof. Chiu Chung Nok). To build the tongue image database, we have been collecting tongue images covering a range of health conditions in collaboration with hospitals and clinics in Hong Kong and Mainland China, using our validated smartphone tongue imaging technique. We collaborate with Queen Elizabeth Hospital, the largest hospital in Hong Kong, to collect data from its diabetic clinic, and we already hold a collection of high-quality smartphone tongue images in our database for feature analysis. To extract and quantify tongue features, we have developed tongue segmentation algorithms that automatically extract the tongue region from facial images and divide it into sections for feature extraction and quantification. Once the digital tongue features are established and quantifiable, their associations with symptoms and biomarkers will be analysed. To create multimodal algorithms for predicting changes in health and disease, we will correlate the quantified tongue features with symptoms and biomarkers using machine-learning regression models.
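The final step described above can be illustrated with a minimal sketch: quantify a segmented tongue region as simple colour statistics, then regress a biomarker on those features. This is not the project's actual pipeline; the feature definitions (mean RGB plus a redness index) and the biomarker pairing are hypothetical, and the fit uses plain least squares for illustration.

```python
import numpy as np

def quantify_tongue_features(rgb_region: np.ndarray) -> np.ndarray:
    """Reduce a segmented tongue-region image (H x W x 3, RGB in [0, 1])
    to simple colour statistics: mean R, G, B and a crude redness index.
    The redness index here is illustrative, not a validated TCM measure."""
    means = rgb_region.reshape(-1, 3).mean(axis=0)
    redness = means[0] - 0.5 * (means[1] + means[2])
    return np.append(means, redness)

def fit_feature_biomarker_model(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least-squares regression of a biomarker (e.g. a hypothetical
    blood-glucose reading) on a matrix of quantified tongue features.
    Returns [intercept, coefficients...]."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef
```

In a fuller version, ordinary least squares would be replaced by regularised or nonlinear regressors, and the colour statistics by section-wise features from the segmentation step, but the feature-to-biomarker structure stays the same.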
Basic programming skills are essential, and experience in machine learning and image analysis is a definite advantage.