
‘There Is No Wall’: Professor Michael Ng on the fundamental Mathematics of discovery


Professor Michael Ng, Dean of Science and Chair Professor in both Mathematics and Data Science at Hong Kong Baptist University, has recently received two of the highest honours in academia. He has been named a “Highly Cited Researcher 2025” by Clarivate, placing his publications within the top 1% most cited worldwide in the “Mathematics” category.

 

His project, “Theoretical Methods for Conceptual Representation and Interpretation of High-Dimensional Data”, received the Second-Class Natural Science Award under the Beijing Science and Technology Awards, recognising its significant contribution to fundamental research.

 


Professor Michael Ng is named a “Highly Cited Researcher 2025” by Clarivate and receives the Second-Class Natural Science Award under the Beijing Science and Technology Awards. 

 

These honours join an already distinguished portfolio, which includes election as a Fellow of both the American Mathematical Society (AMS) and the Society for Industrial and Applied Mathematics (SIAM), a place among the world’s top 2% most-cited scientists, and the 12th Feng Kang Prize for Scientific Computing.

 

In this interview, the researcher and scientist—who bridges Mathematics and Computer Science with an eye for elegance and scalability—shares his cohesive research vision, moving from specific tools to a universal framework for scientific discovery.

 

 

Q: You combine strong mathematics with computing to handle very large, complex datasets. What is the key benefit of building general, foundational tools that work across many problems, instead of custom models for each application? Could you give an example from your imaging or genomics work?

 

A: Large, complex datasets often share common structures despite coming from very different sources. Their processing tasks may differ greatly, but the underlying mathematical logic is often consistent.

 

For me, true power comes from a deep application of mathematics. When the tools you build are truly rooted in a solid theoretical foundation, the methods you ultimately obtain are not limited to a specific problem; they can be applied to an entire class of problems, even those you have not encountered before. That is the beauty of mathematics: it provides guarantees of correctness and scalability.

 

A clear example from our work is a family of matrix factorisation techniques. Rooted in polyhedral theory and convex geometry, they were originally developed for hyperspectral unmixing in remote sensing. Because the core methods rely on universal mathematical properties, only minor pre-processing adjustments are needed for them to work effectively in genomics, such as detecting rare malignant clones across TCGA cancer cohorts and resolving fine-grained cell types in single-cell RNA-sequencing data. This transfer across imaging and biology was possible precisely because the models are grounded in fundamental mathematics rather than domain-specific assumptions.
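To illustrate the domain-agnostic nature of this family of methods (this is a generic textbook sketch, not Professor Ng’s specific algorithm), a minimal non-negative matrix factorisation with Lee–Seung multiplicative updates runs unchanged whether the rows of the data matrix are hyperspectral pixels or gene-expression profiles; all dimensions, iteration counts, and the synthetic data here are illustrative assumptions:

```python
import numpy as np

def nmf(X, rank, n_iter=1000, seed=0):
    """Factorise X ≈ W @ H with W, H >= 0, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 1e-3   # positive random initialisation
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-10                        # guards against division by zero
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative throughout.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic non-negative rank-3 data: the same call would work on an
# imaging or genomics matrix of the same shape.
rng = np.random.default_rng(1)
X = rng.random((60, 3)) @ rng.random((3, 40))
W, H = nmf(X, rank=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Nothing in the factorisation refers to the meaning of the rows or columns; only the pre-processing that builds `X` is domain-specific, which is the point made above.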

 


 

Q: Looking ahead, where do you think matrix and tensor methods (numerical linear algebra and tensor computations) will have the biggest impact? And how is that vision shaping the projects you are pursuing right now?

 

There is no wall between “AI for science” and “science for AI”

Professor Michael Ng 

Dean of Science

A: Consider any frontier AI model: its weight updates during fine-tuning are effectively low-rank (that is why LoRA [Low-Rank Adaptation of Large Language Models] works so well), its attention consists of optimised tensor contractions, and the whole network functions as a sophisticated linear algebra engine running on low-dimensional manifolds.
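The low-rank point can be made concrete in a few lines of NumPy (a hedged sketch; the layer sizes and rank are illustrative, not drawn from any particular model). A LoRA-style update `B @ A` adds only r·(d_out + d_in) trainable parameters to a frozen d_out × d_in weight matrix, and with `B` initialised to zero the adapted layer starts out identical to the original:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8            # illustrative sizes; r << d_out, d_in

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
B = np.zeros((d_out, r))                # LoRA factors: B starts at zero,
A = rng.standard_normal((r, d_in))      # so the update B @ A is initially 0

full_params = d_out * d_in              # cost of fine-tuning the full matrix
lora_params = r * (d_out + d_in)        # cost of the rank-r update: ~32x fewer here

x = rng.standard_normal(d_in)
y_frozen = W @ x
y_adapted = (W + B @ A) @ x             # identical to y_frozen until B is trained
```

The factor-of-32 saving in this toy setup is pure linear algebra: the rank-r factorisation replaces d_out·d_in parameters with r·(d_out + d_in).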

 

That is exciting—everywhere we look, nature and our largest models are both projecting massive complexity onto surprisingly low-dimensional linear-algebraic structures. The underlying connection is not superficial—it is fundamental.

 

There is no wall between “AI for science” and “science for AI”. Mathematics bridges this gap, allowing insights to flow freely between the two. This belief forms the entire development blueprint for my project over the next five years.

 

Q: Your work focuses on algorithms that are both robust and scalable, even for ill-posed inverse problems (where data are noisy or incomplete and solutions may not be unique). How do you balance formal guarantees—like proofs of convergence and stability—with the practical heuristics and tweaks needed to make these methods work on messy, real-world data?

 

A: I do not see it as a trade-off—it is a feedback loop.

 

Most often, we start with a theoretical guarantee under a clean model (convergence, stability constants, exact recovery thresholds). That proof tells us exactly where the method is safe and which assumptions are load-bearing. Real-world data inevitably violate some of those assumptions, but because we know the theory inside out, we can relax or patch the right parts deliberately: a warm start here, adaptive regularisation there, outlier pre-screening when the noise model is heavier-tailed.

 

When we face messy real-world data, we never abandon the theory. Instead, we use the analysis to guide every practical decision—from pre-processing and initialisation to parameter schedules and stopping criteria. Because we know precisely how each relaxation or perturbation affects the bounds, we can keep the core algorithm faithful to the proof while making only the minimal, theoretically justified adjustments needed for robustness. The resulting methods can maintain provable predictability even on noisy, incomplete, and large-scale problems, which is why we and our collaborators are confident in the results.

 

Professor Ng’s publications and projects: https://scholars.hkbu.edu.hk/en/persons/kwok-po-ng/