Bin Wang

Scientist
Institute for Infocomm Research (I\(^2\)R), A*STAR, Singapore
Address: 1 Fusionopolis Way, Level 12, Connexis South Tower, Singapore 138632
Personal Email: bwang28c [@] gmail.com
Work Email: wang_bin [@] i2r.a-star.edu.sg
[Google Scholar]    [GitHub]    [HuggingFace]    [LinkedIn]    [Twitter]

News

About Me

I am a scientist in the Aural & Language Intelligence Department at I2R, A*STAR. Before joining, I was a research fellow at the National University of Singapore (NUS), working with Prof. Haizhou Li from 2021 to 2023. I received my Ph.D. from the University of Southern California (USC) in 2021, supervised by Prof. C.-C. Jay Kuo, and obtained my bachelor's degree from the University of Electronic Science and Technology of China (UESTC) in 2017.

Some of the topics that I am currently researching include:

  1. Making LLMs Hear - AudioLLM - Audio-Based Large Language Models

    1. What techniques can be used to effectively integrate audio processing capabilities into existing LLM architectures?

    2. What is the most efficient approach for achieving seamless cross-modality integration?

    3. What benchmarks can be designed to accurately evaluate the real-world performance of AudioLLMs?

    4. Current Outcomes: MERaLiON-AudioLLM, AudioBench, Awesome-Audio-LLM, MoWE-Audio

  2. Multilingual and Multicultural LLMs

    1. What unique properties should a multilingual LLM possess to cater to diverse languages effectively?

    2. How can multilingual learning be made more efficient and effective, especially for low-resource languages?

    3. What internal mechanisms can ensure robust multilingual knowledge alignment within the model?

    4. Current Outcomes: SeaEval, CRAFT, CrossIn, SEACrowd

  3. Conversational AI

    1. How can representation learning support retrieval-augmented generation and knowledge graphs?

    2. What representation and coordination strategies can enhance multi-agent communication in shared environments?

    3. What methods can enable conversational agents to effectively reason and plan based on learned or provided world models?

    4. Current Outcomes: Representation Learning, Commonsense Knowledge Graph

Opportunities

We are actively looking for candidates working on Multimodal LLMs (text, audio, vision, etc.).

  • Research Interns (6 months or longer preferred)

    • SIPGA for international (master's / undergraduate) students.

    • ARIA for Singaporean undergraduate students.

    • ARAP for international Ph.D. students, for attachments of 1-2 years.

    • Local students from NUS, NTU, SMU, SUTD, SIT, polytechnics, etc.: please contact me directly for attachment to projects.

  • Ph.D. Students

  • Long-term Positions

    • Research Engineer / Scientist (both engineering and research backgrounds are preferred).

Some Publications

  1. Bin Wang, Xunlong Zou, Geyu Lin, Shuo Sun, Zhuohan Liu, Wenyu Zhang, Zhengyuan Liu, AiTi Aw, Nancy F. Chen. “AudioBench: A Universal Benchmark for Audio Large Language Models.” NAACL, 2025. [paper], [code]

  2. Bin Wang, Zhengyuan Liu, Xin Huang, Fangkai Jiao, Yang Ding, AiTi Aw, Nancy F. Chen. “SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning.” NAACL, 2024. [paper], [code]

  3. Bin Wang, Chen Zhang, Yan Zhang, Yiming Chen and Haizhou Li. “Analyzing and Evaluating Faithfulness in Dialogue Summarization.” EMNLP, 2022. [paper], [code]

  4. Bin Wang, C.-C. Jay Kuo, and Haizhou Li. “Just Rank: Rethinking Evaluation with Word and Sentence Similarities.” ACL, 2022. [paper], [code]

  5. Bin Wang, Guangtao Wang, Jing Huang, Jiaxuan You, Jure Leskovec, and C.-C. Jay Kuo. “Inductive learning on commonsense knowledge graph completion.” IJCNN, 2021. [paper], [code]

  6. Bin Wang, and C.-C. Jay Kuo. “SBERT-WK: A sentence embedding method by dissecting bert-based word models.” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020. [paper], [code]

  7. Bin Wang*, Angela Wang*, Fenxiao Chen, Yuncheng Wang, and C.-C. Jay Kuo. “Evaluating word embedding models: methods and experimental results.” APSIPA transactions on signal and information processing, 2019. [paper], [code]

Full list of publications.