Can Cui (崔璨)

Research Scientist in VLA
Bosch Center for Artificial Intelligence (BCAI)
Ph.D., Purdue University
Email: cancui19 [at] gmail [dot] com
Office: 384 Santa Trinita Ave, Sunnyvale, CA 94085


I am a Research Scientist in Vision-Language Action (VLA) Models at the Bosch Center for Artificial Intelligence (BCAI), developing foundation models for personalized, interpretable, and safe autonomous driving. My prior industry experience includes serving as an AI Research Intern at Toyota InfoTech Labs and as a Controls Research Intern at Cummins Inc., where I developed control algorithms and digital twin validation for physical systems. I earned my Ph.D. from Purdue University, advised by Dr. Ziran Wang, where my research focused on human–autonomy teaming, multimodal perception, and digital twin–based validation for autonomous vehicles. My work spans LLMs/VLMs, VLA models, human–autonomy teaming, generative motion planning, control, and data-driven autonomy.

I am actively seeking a tenure-track faculty position in autonomous systems and transportation AI, with a focus on foundation models, human–autonomy teaming, embodied AI, and real-world deployment for intelligent mobility.

News

Jan 5, 2026 I started working at the Bosch Center for Artificial Intelligence (BCAI) as a Research Scientist in Vision-Language Action (VLA) Models! 🎉
Dec 1, 2025 I successfully defended my Ph.D. dissertation on “Foundation Models for Human-Autonomy Teaming in Autonomous Vehicles”! 🎉
Oct 8, 2025 I will serve as a Guest Editor of the JCAV Focus Issue on Large Language and Vision Models for Connected and Automated Vehicles! 🎉
Sep 10, 2025 One paper is accepted at EMNLP 2025! 🎉
Jan 17, 2025 I will serve as the General Chair of the CVPR 2025 Workshop on Distillation of Foundation Models for Autonomous Driving. See you in Nashville! 🎉

Selected Publications

* indicates equal contributions.
Please check my Google Scholar for the complete list of publications.
  1. P-IEEE
    LLM4AD: Large Language Models for Autonomous Driving – Concept, Review, Benchmark, Experiments, and Future Trends
    In Proceedings of the IEEE (P-IEEE), 2025
  2. EMNLP
    DASR: Distributed Adaptive Scene Recognition-A Multi-Agent Cloud-Edge Framework for Language-Guided Scene Detection
    Can Cui, Yongkang Liu, Seyhan Ucar, Juntong Peng, Ahmadreza Moradipari, Maryam Khabazi, and Ziran Wang
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track (EMNLP), 2025
  3. ITSC
    Personalized Autonomous Driving with Large Language Models: Field Experiments
    Can Cui, Zichong Yang, Yupeng Zhou, Yunsheng Ma, Juanwu Lu, Lingxi Li, Yaobin Chen, Jitesh Panchal, and Ziran Wang
    In 2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), 2024
  4. CVPR
    LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs
    Yunsheng Ma*, Can Cui*, Xu Cao*, Wenqian Ye, Peiran Liu, Juanwu Lu, Amr Abdelraouf, Rohit Gupta, Kyungtae Han, Aniket Bera, James M. Rehg, and Ziran Wang
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
  5. T-IV
    REDFormer: Radar Enlightens the Darkness of Camera Perception with Transformers
    Can Cui, Yunsheng Ma, Juanwu Lu, and Ziran Wang
    In IEEE Transactions on Intelligent Vehicles (T-IV), 2023