Ph.D., Robotics Institute
School of Computer Science
Carnegie Mellon University

Office: Smith Hall 210
Email: xinshuow@cs.cmu.edu


Xinshuo Weng is a Ph.D. student (2018-) at the Robotics Institute of Carnegie Mellon University (CMU), advised by Kris Kitani. She received her Master's degree (2016-17) also at CMU, where she worked with Yaser Sheikh and Kris Kitani. Prior to CMU, she worked at Oculus Research Pittsburgh (now Facebook Reality Lab) as a research engineer. She received her Bachelor's degree from Wuhan University. Her primary research interest lies in 3D computer vision and Graph Neural Networks for autonomous systems. She was awarded a Qualcomm Innovation Fellowship for 2020-2021.

Fields: Computer Vision, Machine Learning, Robotics, Multimedia
Topics: 3D Computer Vision, Autonomous Driving, Graph Neural Networks, Generative Modeling, Video Analysis

Google Scholar  /  GitHub  /  LinkedIn  /  Twitter  /  Facebook  /  ResearchGate  /  Semantic Scholar
  • 10/2020 - One paper accepted at CoRL 2020
  • 09/2020 - One paper accepted at ISARC 2020
  • 08/2020 - I am honored to have received the Qualcomm Innovation Fellowship 2020
  • 08/2020 - Keynote Speaker at ECCV 2020 Workshop on Benchmarking Trajectory Forecasting Models [Slides]
  • 08/2020 - I am co-organizing NeurIPS 2020 Workshop on Machine Learning for Autonomous Driving
  • 08/2020 - Four (one oral, three spotlight) papers accepted at ECCV 2020 Workshops
  • 06/2020 - Two papers accepted at IROS 2020
  • 06/2020 - Keynote Speaker at CVPR 2020 Workshop on Scalability in Autonomous Driving [Slides] [Video]
  • 04/2020 - One paper accepted at TPAMI 2020
  • 03/2020 - One paper accepted at CVPR 2020
  • 08/2019 - One paper accepted at ICCV Workshops 2019
  • 06/2019 - We release the code for our 3D MOT paper here
  • 06/2019 - Three papers accepted at BMVC 2019, ACMMM 2019, IROS 2019
  • 01/2018 - One paper accepted at CVPR 2018
  • 10/2017 - One paper accepted at WACV 2018
  • All-in-One Drive: A Large-Scale and Comprehensive Perception Dataset with High-Density Long-Range Point Cloud
    Xinshuo Weng, Yunze Man, Dazhi Cheng, Jinhyung Park, Matthew O'Toole, Kris Kitani
    arXiv, 2020
    PDF | Code | BibTex
    The largest autonomous driving dataset, with a superset of synthetic sensors, complete annotations for all perception tasks, and rare driving situations


    Inverting the Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting
    Xinshuo Weng, Jianren Wang, Sergey Levine, Kris Kitani, Nick Rhinehart
    Conference on Robot Learning (CoRL), 2020
    PDF | Code | Website | BibTex
    By learning to forecast future LiDAR point clouds, we build a new trajectory forecasting pipeline that requires no trajectory labels


    Joint 3D Tracking and Forecasting with Graph Neural Network and Diversity Sampling
    Xinshuo Weng*, Ye Yuan*, Kris Kitani
    arXiv:2003.07847, 2020
    PDF | Code | Website | BibTex
    The first unified 3D MOT and trajectory forecasting method with object interaction modeling and diverse trajectory samples

    GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking with 2D-3D Multi-Feature Learning
    Xinshuo Weng, Yongxin Wang, Yunze Man, Kris Kitani
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
    PDF | Code | Demo | Website | Slides | BibTex
    The first multi-object tracking method that leverages Graph Neural Network for object interaction modeling
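    To convey what "object interaction modeling" with a graph neural network looks like, a generic single message-passing step is sketched below; it is not the GNN3DMOT architecture, and all names (message_passing_step, w_self, w_neigh) are illustrative.

    # Illustrative sketch: one generic message-passing step over per-object features.
    # Not the GNN3DMOT architecture; shown only to convey the interaction-modeling idea.
    import numpy as np

    def message_passing_step(node_feats, adjacency, w_self, w_neigh):
        """node_feats: (N, D) per-object features; adjacency: (N, N) 0/1 interaction graph."""
        deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1.0)
        neighbor_mean = adjacency @ node_feats / deg        # average features of interacting objects
        updated = node_feats @ w_self + neighbor_mean @ w_neigh
        return np.maximum(updated, 0.0)                     # ReLU

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(4, 8))                         # 4 tracked objects, 8-dim features
    adj = np.ones((4, 4)) - np.eye(4)                       # fully connected interaction graph
    print(message_passing_step(feats, adj, rng.normal(size=(8, 8)), rng.normal(size=(8, 8))).shape)  # (4, 8)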

    3D Multi-Object Tracking: A Baseline and New Evaluation Metrics
    Xinshuo Weng, Jianren Wang, David Held, Kris Kitani
    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
    (Oral Presentation)
    PDF | Code | Demo | Website | Slides | BibTex
    A 3D multi-object tracker that achieves state-of-the-art performance with the fastest speed
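    As a rough illustration of the detection-to-track matching step such a tracking-by-detection baseline performs, the sketch below associates tracks with detections via the Hungarian algorithm on a centroid-distance cost; a real 3D tracker would typically use 3D IoU between predicted and detected boxes, and the function and parameter names here (associate, max_dist) are hypothetical.

    # Simplified detection-to-track association sketch (not the paper's exact formulation).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(track_centers, det_centers, max_dist=2.0):
        """Match tracks to detections; returns a list of (track_idx, det_idx) pairs."""
        cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)            # optimal one-to-one assignment
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]  # distance gating

    tracks = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 1.0]])
    dets = np.array([[0.3, 0.1, 0.0], [5.2, -0.1, 1.1], [20.0, 0.0, 0.0]])
    print(associate(tracks, dets))                          # [(0, 0), (1, 1)]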

    Monocular 3D Object Detection with Pseudo-LiDAR Point Cloud
    Xinshuo Weng, Kris Kitani
    IEEE International Conference on Computer Vision (ICCV) Workshops, 2019
    PDF | Code | Poster | BibTex
    By projecting the 2D image to a pseudo-LiDAR point cloud representation, our monocular 3D detection pipeline quadruples the performance over prior art
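    The back-projection at the heart of the pseudo-LiDAR representation is sketched below under a standard pinhole camera model: each pixel with an estimated depth is lifted to a 3D point using the camera intrinsics. The variable names (fx, fy, cx, cy) and toy values are illustrative, not taken from the paper's released code.

    # Minimal sketch: lift a per-pixel depth map into a pseudo-LiDAR point cloud.
    import numpy as np

    def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
        """Convert an HxW depth map (meters) into an Nx3 point cloud in camera coordinates."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))      # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]                     # drop pixels without valid depth

    toy_depth = np.array([[10.0, 12.0], [11.0, 0.0]])       # hypothetical 2x2 depth map
    print(depth_to_pseudo_lidar(toy_depth, 721.5, 721.5, 1.0, 1.0).shape)  # (3, 3)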
