Reeshad Khan

Efficient AI (EdgeAI) & Autonomous Systems Perception — BEV • Radar–Vision Fusion • RAW-to-Task Co-Design

GitHub Google Scholar

About Me

I build efficient perception systems for autonomy that run in real time on edge hardware. My work spans camera-only BEV (TinyBEV), radar–vision fusion for adverse weather, sensor–optics–model co-design (RAW-to-task) for robustness, and evaluation frameworks for 3D reconstruction (4DCF). I care about latency, reliability, and deployment: profiling, pruning, quantization, TensorRT, and mixed precision to hit FPS targets without sacrificing safety.

Real-Time BEV · EdgeAI · Radar–Vision Fusion · RAW-to-Task Co-Design · Embedded Deployment · 3D Perception

Research Focus

Efficient AI / EdgeAI

  • Model compression (pruning, quantization), TensorRT, AMP
  • Throughput-driven design for NVIDIA Jetson (Orin) and RTX/data-center GPUs
  • Profiling: kernel hotspots, memory scheduling, and batch-1 latency (see the sketch below)
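
For a sense of how I measure the batch-1 latency I optimize for, here is a minimal PyTorch sketch under FP16 autocast; the model, input shape, and iteration counts are placeholders, not a fixed benchmark protocol.

```python
import torch

@torch.inference_mode()
def batch1_latency_ms(model, input_shape=(1, 3, 448, 800), iters=200, warmup=50):
    """Median batch-1 latency (ms) under FP16 autocast on a CUDA device."""
    device = torch.device("cuda")
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    for i in range(warmup + iters):
        start.record()
        with torch.autocast("cuda", dtype=torch.float16):
            model(x)
        end.record()
        torch.cuda.synchronize()                   # wait for all kernels to finish
        if i >= warmup:
            times.append(start.elapsed_time(end))  # elapsed time in milliseconds
    times.sort()
    return times[len(times) // 2]
```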

Autonomous Perception

  • Camera-only BEV perception (detection, mapping, forecasting, planning)
  • Radar–Vision fusion for fog/rain/low-light robustness
  • Sensor–optics co-design (RAW-to-task) for semantics under noise/quantization

Flagship Projects

TinyBEV — Cross-Modal Distillation for Camera-Only BEV (ICCV 2025 WDFM)

Lightweight BEV model distilled from UniAD for real-time, camera-only multi-task autonomy: detection, mapping, forecasting, and planning. Designed for embedded deployment (EdgeAI).

  • Knowledge distillation to compress a multi-modal teacher into a compact student (see the sketch below)
  • Latency-aware training; on-device optimizations to meet FPS targets
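
This is not the TinyBEV training recipe itself, but a minimal sketch of the distillation objective behind the teacher → student setup: a task loss combined with soft-label and feature-imitation terms against a frozen teacher. The loss weights, temperature, and tensor names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_feat, teacher_feat,
                      task_loss, T=4.0, alpha=0.5, beta=0.1):
    """Task loss + soft-label KD + feature imitation (illustrative weighting)."""
    # Soften teacher/student logits and match their distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits.detach() / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Imitate intermediate BEV features of the (frozen) multi-modal teacher.
    feat = F.mse_loss(student_feat, teacher_feat.detach())
    return task_loss + alpha * kd + beta * feat
```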

Paper (soon) · Code & demos

Learning to Sense for Driving — RAW-to-Task Optics–Sensor–Model Co-Design (WACV 2026 under review)

End-to-end co-design of cellphone-scale optics, a learnable color filter array (CFA), Poisson–Gaussian noise modeling, and quantization, paired with a compact segmentation head. Improves mIoU and boundary accuracy under low light, blur, and reduced bit depth, delivering robust semantics with deployment-grade runtime.

  • Exposure control and differentiable optics; straight-through quantization (see the sketch below)
  • Mixed precision, ablations, and Pareto (accuracy vs. latency) analysis
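
As a sketch of the straight-through quantization idea (not the exact formulation in the paper), the following applies uniform fake quantization in the forward pass while letting gradients flow through unchanged; the [0, 1] normalization and default bit-depth are assumptions.

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Uniform quantization in the forward pass; identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, bits=8):
        levels = 2 ** bits - 1
        x = x.clamp(0.0, 1.0)                    # assume a normalized sensor signal
        return torch.round(x * levels) / levels  # quantize to the given bit-depth

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                 # straight-through: pass gradients unchanged

def fake_quantize(x, bits=8):
    return FakeQuantSTE.apply(x, bits)
```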

Preprint (under review)

Radar–Vision Fusion for Adverse Conditions (WACV 2026 project)

Modular fusion (early/late/attention) to maintain perception quality in rain, fog, and low light. Targets real-time inference and embedded deployment with ablation-ready components.

  • nuScenes-based pipelines; attention-weighted sensor reliability (see the sketch below)
  • Profiling, visualization, and safety-critical metrics tracking
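
A minimal sketch of attention-weighted fusion over per-sensor BEV features, for intuition only; the module, tensor shapes, and 1×1-conv gating are assumptions rather than the project's actual architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Predict per-location reliability weights for camera and radar BEV features."""

    def __init__(self, channels: int):
        super().__init__()
        # Scores each modality per BEV cell; softmax turns scores into reliability weights.
        self.gate = nn.Conv2d(2 * channels, 2, kernel_size=1)

    def forward(self, cam_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        # cam_bev, radar_bev: (B, C, H, W) features on a shared BEV grid
        w = torch.softmax(self.gate(torch.cat([cam_bev, radar_bev], dim=1)), dim=1)
        return w[:, 0:1] * cam_bev + w[:, 1:2] * radar_bev
```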

Tech report (in prep)

4DCF — Robustness Evaluation for 3D Reconstruction (ISVC 2025 under review)

A unified robustness suite for 3D reconstruction: Spatial Smoothness, Scale Stability, Perturbation Robustness, and Temporal Consistency, with radar-chart diagnostics to expose failure modes beyond Chamfer/PSNR.
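
The precise 4DCF definitions are in the paper; purely for intuition, a temporal-consistency-style score over consecutive reconstructed point clouds could be sketched as below, with Chamfer distance standing in as an illustrative placeholder.

```python
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def temporal_consistency(frames: list[torch.Tensor]) -> torch.Tensor:
    """Mean Chamfer distance between consecutive frames; lower means a steadier reconstruction."""
    gaps = [chamfer(frames[t], frames[t + 1]) for t in range(len(frames) - 1)]
    return torch.stack(gaps).mean()
```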

Preprint (under review)

Semantic Scene Understanding for Autonomous Systems

Real-time panoptic perception and 3D reconstruction across diverse environments with deployment-minded training (mixed precision, memory optimization, structured pruning).
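
A minimal structured-pruning pass using PyTorch's built-in utilities, as one example of the deployment-minded training steps above; the 30% ratio and Conv2d-only targeting are placeholders.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_channels(model: nn.Module, amount: float = 0.3) -> nn.Module:
    """Remove the lowest-L2-norm output channels from every Conv2d layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # Zero out `amount` of output channels (dim=0) ranked by L2 norm (n=2).
            prune.ln_structured(module, name="weight", amount=amount, n=2, dim=0)
            prune.remove(module, "weight")  # bake the pruning mask into the weights
    return model
```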

Earlier work: unsupervised MRI denoising and reconstruction (VISAPP 2025; IEEE Access). Useful for technique development, but my current focus is autonomy and EdgeAI.

Selected Publications

  1. TinyBEV: Cross-Modal Knowledge Distillation for Efficient Multi-Task BEV Perception and Planning. ICCV WDFM Workshop, 2025.
  2. Learning to Sense for Driving: Joint Optics–Sensor–Model Co-Design for Semantic Segmentation. WACV 2026, under review.
  3. 4DCF: A 4D Consistency Field for Robust Evaluation of 3D Scene Reconstruction. ISVC 2025, under review.
  4. From Noise Estimation to Restoration. VISAPP 2025.
  5. Learning from Oversampling for MRI Reconstruction. IEEE Access, 2024.

Full list on Google Scholar.

Skills

Perception

  • Detection, segmentation, panoptic, BEV
  • 3D reconstruction & evaluation (4DCF)

EdgeAI

  • Pruning, quantization, TensorRT, AMP
  • Kernel/memory profiling, batch-1 latency

Stacks & Tools

  • PyTorch, TensorFlow, Detectron2, OpenCV
  • CUDA, Docker, CI/CD, HPC clusters

Contact

Email: rk010@uark.edu
LinkedIn: linkedin.com/in/reeshad-khan-a50207106
GitHub: github.com/Reeshad-Khan