Haojian Huang is a master's student at the University of Hong Kong. He previously interned at TeleAI, where he was mentored by Professor Xuelong Li and Associate Professor Mulin Chen. His research encompasses Trusted AI, Personalized AGI, and Video Understanding. He is currently preparing to focus further on Embodied AI, with the goal of creating more engaging and reliable solutions for human well-being and animal welfare. Additionally, he leads CareerSynapse, a dynamic, student-driven business group that fosters creativity and practical applications using large language models (LLMs). He welcomes ANY AI research collaboration and is actively seeking research internship and Ph.D. opportunities. Feel free to get in touch with him at haojianhuang927 AT gmail.com.
Evidential Deep Learning for Robust Video Temporal Grounding
Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning
Trusted Unified Feature-Neighborhood Dynamics for Multi-View Classification
Evidential Deep Partial Multi-View Classification With Discount Fusion
Towards Robust Uncertainty-Aware Incomplete Multi-View Classification
Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs
Recent Trends of Multimodal Affective Computing: A Survey from NLP Perspective
3D Human Virtual Try-ON via Multi-Stage Gaussian Splatting Editing with Image Prompting