About

Advances in low-cost, low-power silicon radio frequency (RF) integrated circuits (ICs) over the last two decades have opened up commercial applications for millimeter wave (mmWave) frequencies, which are an order of magnitude higher than those used in WiFi and cellular today. Large-scale deployment of mmWave communication networks, such as NextG cellular infrastructure outdoors and NextG WiFi infrastructure indoors, implies that these resources can be leveraged for RF imaging at scales that are not otherwise possible. This project seeks to lay the intellectual foundations for Joint Communication and Imaging (JCAI) at city scales using this emerging mmWave infrastructure. Each sensor in such a system provides 4D measurements (range, Doppler, azimuth angle, and elevation angle) whose resolution improves at higher frequencies.

This three-year project (2022-25) is funded by the National Science Foundation under grant CNS-2215646. It is a cross-disciplinary collaboration among research leaders in communications & MIMO radar and imaging, robotics for communication and sensing, RFIC design and packaging for massive mmWave MIMO arrays, and large-scale programmable networks for communications and sensing.

Technical Approach

We take advantage of 4D measurements (range, Doppler, azimuth and elevation angles) at unprecedented resolution: higher carrier frequencies enhance Doppler resolution; larger bandwidths enhance range resolution; and tiny carrier wavelengths make it possible to compactly realize 2D antenna arrays with a large number of elements, enhancing azimuth and elevation angular resolution. The key aspects of our technical plan are as follows (a back-of-the-envelope sketch of these resolution relations appears after the list):

(1) Significantly increasing imaging resolution by creating large effective apertures with networked collaboration, using the scale provided by a fixed wireless infrastructure, along with strategic use of unmanned vehicles;

(2) Developing a control plane for multi-function network operation for JCAI, including a resource management framework based on concepts such as imaging demand and imaging capacity, and protocols supporting collaborative imaging;

(3) Developing platforms for experimentation in JCAI based on off-the-shelf mmWave hardware at 60 and 77 GHz, as well as hardware beyond 100 GHz developed under other programs.
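
As a back-of-the-envelope illustration of how carrier frequency, bandwidth, and array size drive the 4D resolution discussed above, the sketch below uses the standard first-order relations (range resolution c/2B, velocity resolution lambda/2T, beamwidth of roughly 2/N radians for a lambda/2-spaced array). The parameter values are illustrative, not specifications of our hardware.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def range_resolution(bandwidth_hz):
    """Range resolution of a pulsed/FMCW radar: c / (2B)."""
    return C / (2 * bandwidth_hz)

def velocity_resolution(carrier_hz, cpi_s):
    """Radial velocity (Doppler) resolution: lambda / (2 * T_CPI)."""
    wavelength = C / carrier_hz
    return wavelength / (2 * cpi_s)

def angular_resolution_deg(num_elements):
    """Approximate beamwidth of a lambda/2-spaced ULA: ~2/N radians."""
    return np.degrees(2.0 / num_elements)

# Illustrative numbers (not the project's exact hardware parameters):
for fc, bw in [(77e9, 4e9), (140e9, 8e9)]:
    print(f"fc = {fc/1e9:.0f} GHz, B = {bw/1e9:.0f} GHz:")
    print(f"  range resolution   : {100*range_resolution(bw):.2f} cm")
    print(f"  velocity resolution: {100*velocity_resolution(fc, 10e-3):.1f} cm/s (10 ms CPI)")
    print(f"  angular resolution : {angular_resolution_deg(16):.1f} deg (16-element array)")
```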

Project Team

PIs

Graduate students and postdocs (Current)

Graduate students and postdocs (Previous)

Research Activities

Our research activities include demonstrating novel approaches to RF sensing, developing architectures and signal processing algorithms for networked sensing, and developing hardware for accessing bands beyond 100 GHz.

Imaging edges: When there is no motion, imaging still objects with radio waves is a challenge because Doppler information is no longer available for segmenting objects. While most prior work has focused on imaging object surfaces, we consider the RF imaging problem from a different perspective by focusing on object edges. When an incoming wave is incident on an edge, it results in a cone of outgoing rays, referred to as the Keller cone, as dictated by the Geometrical Theory of Diffraction. Depending on the location and orientation of the edge, this cone leaves different footprints on a receiver array, which can act as a signature for imaging purposes. We have proposed a new processing pipeline around this principle, which can image still objects by tracing their edges. Analysis and experimental results for RF-based edge tracing have been published at IEEE RadarConf 2023.
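
The geometry underlying this pipeline can be stated compactly: diffracted rays preserve the incident ray's angle with the edge, so they lie on a cone whose axis is the edge. The sketch below (our illustration, not the published pipeline) samples outgoing directions on the Keller cone for a given edge orientation and incident direction.

```python
import numpy as np

def keller_cone_rays(edge_dir, incident_dir, num_rays=16):
    """Sample outgoing ray directions on the Keller cone.

    Per the Geometrical Theory of Diffraction, diffracted rays make the
    same angle with the edge as the incident ray: the component along
    the edge is preserved, and the perpendicular component sweeps a cone.
    """
    e = edge_dir / np.linalg.norm(edge_dir)
    d = incident_dir / np.linalg.norm(incident_dir)
    par = np.dot(d, e)                 # along-edge component (preserved)
    perp_mag = np.sqrt(1.0 - par**2)   # magnitude swept around the cone
    # Build an orthonormal basis {u, v} for the plane perpendicular to the edge.
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(tmp, e)) > 0.9:
        tmp = np.array([0.0, 1.0, 0.0])
    u = np.cross(e, tmp); u /= np.linalg.norm(u)
    v = np.cross(e, u)
    phis = np.linspace(0, 2 * np.pi, num_rays, endpoint=False)
    return np.array([par * e + perp_mag * (np.cos(p) * u + np.sin(p) * v)
                     for p in phis])

rays = keller_cone_rays(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.5]))
print(rays.shape)  # (16, 3) unit vectors, all at the same angle to the edge
```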

Beyond field-of-view target localization: mmWave radar has a limited imaging field of view due to high directionality and reliance on single-bounce scattering from the objects being imaged. We proposed exploiting natural multi-bounce scattering in the environment to enable mmWave radar imaging of objects beyond the single-bounce field of view (e.g., around corners and behind the radar). Prior research on multi-bounce radar sensing makes specific assumptions about the number of bounces, requires additional hardware, or assumes prior knowledge of the environment. In contrast, our method exploits various orders of multi-bounce (from single-bounce to triple-bounce) and requires no additional hardware or prior knowledge about the environment. There are two core innovations in our method: (i) a matched filtering algorithm that can directly localize objects at their ground-truth locations along specific multi-bounce paths, and (ii) a sequential iterative pipeline that performs matched filtering and object detection separately and sequentially along single-, double-, and triple-bounce paths, and uses object detections from previous iterations to compensate for the radar's lack of prior environment knowledge. Our implementation on a commercial millimeter-wave MIMO radar testbed demonstrates a 2×-10× improvement in the median localization error for humans standing outside the radar's field of view in various indoor and outdoor scenarios. The results were presented at ACM MobiCom 2024.
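
To illustrate the geometric core of multi-bounce localization, the sketch below uses the standard image (mirror) method to predict the round-trip length of a candidate bounce path off a known planar reflector; a matched filter would test the delays of such candidate paths against the received signal. The scene, wall, and function names are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def mirror(point, wall_pt, wall_normal):
    """Reflect a point across a planar wall (image method)."""
    n = wall_normal / np.linalg.norm(wall_normal)
    return point - 2 * np.dot(point - wall_pt, n) * n

def bounce_path_length(radar, target, walls):
    """Round-trip length: radar -> wall(s) -> target, then straight back.

    Reflecting the radar successively across each wall turns the bent
    outbound path into a straight line from the virtual radar to the target.
    """
    virt = radar.copy()
    for wall_pt, wall_n in walls:
        virt = mirror(virt, wall_pt, wall_n)
    outbound = np.linalg.norm(target - virt)   # multi-bounce outbound leg
    inbound = np.linalg.norm(target - radar)   # direct return leg
    return outbound + inbound

radar  = np.array([0.0, 0.0])
target = np.array([4.0, -1.0])                          # e.g., off boresight
wall   = (np.array([0.0, 2.0]), np.array([0.0, 1.0]))   # wall along y = 2
print(bounce_path_length(radar, target, [wall]))        # candidate path delay * c
```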

Collaborative Inference using Networked MIMO Radar Sensing: We are building the core components of an architecture for collaborative inference using high-resolution mmWave MIMO radar nodes. Each node can track targets in its field of view (FOV) using range, Doppler, and angle, but can only estimate the radial velocity component and is limited in angular resolution by its aperture size. By leveraging multiple radars, we enable full velocity vector estimation with one-shot fusion, achieving robust multi-target tracking. Collaborative networked sensing also unlocks new capabilities such as wide-area “cellular-style” coverage, improved resolution from a larger effective aperture, and resilience to FOV and line-of-sight (LoS) limitations. However, a key requirement for collaborative tracking and track-level fusion is calibration: knowing the relative positions and orientations of the radars. We proposed an autocalibration strategy based on joint target tracking and pose estimation, fusing, at a centralized data fusion center, measurements of a moving target seen by multiple radars in their overlapping FOVs. For 2D scenes, we have derived an optimal algorithm with a closed-form solution that enables any two nodes tracking a common target to determine their relative poses by matching their estimated tracks. This initial work was presented at the Asilomar 2024 conference. Furthermore, we experimentally demonstrated this approach using TI MIMO radar platforms (AWR2243BOOST), showing successful self-calibration with two nodes and a moving human target. We also illustrated one specific application of a self-calibrated network: one-shot fusion of observations from collaborating nodes to obtain instantaneous estimates of position and velocity. While a single MIMO radar provides only radial velocity estimates, so that the velocity vector must be estimated by tracking over multiple frames, calibrated collaborating radars can provide instantaneous estimates of the full velocity vector. Maximum Likelihood (ML) fusion, which has been considered in prior work, fails in geometrically degenerate settings. We developed and experimentally demonstrated a Bayesian framework for one-shot fusion that handles geometric degeneracies using priors on target motion. This work has been accepted for publication at RadarConf 2025. Next, we plan to extend this work to complex multipath environments and address ghost target removal and static scene reconstruction.
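
As a minimal sketch of the closed-form track-matching step, assuming both radars have already associated their tracks of the common target, the standard least-squares (Procrustes) alignment below recovers the 2D relative pose from matched track points; the published estimator additionally accounts for tracking uncertainty.

```python
import numpy as np

def relative_pose_2d(track_a, track_b):
    """Closed-form 2D rigid alignment of two tracks of a common target.

    Given the same trajectory estimated in the frames of radar A (track_a)
    and radar B (track_b), both N x 2 arrays of matched points, return the
    rotation R and translation t with track_a ~= track_b @ R.T + t.
    """
    mu_a, mu_b = track_a.mean(axis=0), track_b.mean(axis=0)
    H = (track_b - mu_b).T @ (track_a - mu_a)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_a - R @ mu_b
    return R, t

# Synthetic check: rotate/translate a track and recover the pose.
rng = np.random.default_rng(0)
track_b = rng.uniform(0, 10, size=(50, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
track_a = track_b @ R_true.T + np.array([2.0, -1.0])
R, t = relative_pose_2d(track_a, track_b)
print(np.allclose(R, R_true), t)  # True [ 2. -1.]
```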

Velocity estimation using MIMO radar: MIMO radars can estimate the radial velocities of moving objects, but not their tangential velocities. We proposed a method that exploits multi-bounce scattering in the environment to estimate a moving object’s entire velocity vector, both tangential and radial. Classical approaches to velocity vector estimation involve tracking targets over multiple frames. Our method enables instantaneous velocity vector estimation with a single MIMO radar, without additional sensors or assumptions about the object size. The only requirement is the existence of at least one angle-resolvable multi-bounce path to/from the object via static landmarks in the environment. We tested our approach using simulations and experiments with TI’s mmWave MIMO radar, the AWR2243 cascade. Initial results were published and presented at IEEE CISA 2024. We have since extended this work with theoretical analysis and additional results; the extended work has been accepted for publication in IEEE Transactions on Computational Imaging.
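
The linear-algebraic core of this idea can be sketched as follows: each angle-resolvable path contributes one linear constraint on the velocity vector, and two or more independent constraints determine it. In the sketch below, the round-trip range rate of the direct path is 2u·v, and that of a path radar-landmark-target-radar is (u_direct + u_bounce)·v, where u_bounce is the unit vector from landmark to target; the specific geometry and names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def velocity_from_path_dopplers(path_dirs, range_rates):
    """Least-squares 2D velocity estimate from per-path range rates.

    Each resolvable path i contributes a linear constraint a_i . v = rdot_i,
    where a_i is the sum of the unit vectors of the path legs that
    terminate at the moving target.
    """
    A = np.atleast_2d(path_dirs)
    return np.linalg.lstsq(A, np.asarray(range_rates), rcond=None)[0]

# Example: target moving with v = (1, 2) m/s.
v_true = np.array([1.0, 2.0])
u_direct = np.array([1.0, 0.0])                        # radar -> target
u_bounce = np.array([0.6, 0.8])                        # landmark -> target
paths = np.stack([2 * u_direct, u_direct + u_bounce])  # round-trip geometry
rdots = paths @ v_true                                 # simulated range rates
print(velocity_from_path_dopplers(paths, rdots))       # ~[1. 2.]
```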

Implicit shape estimation from sub-THz SAR: Commercial radar systems typically produce 3D images with limited range resolution, resulting in coarse detail that limits interpretability for higher-level tasks such as 3D object detection and classification. To address this limitation, we use an experimental 110-260 GHz SAR to generate millimeter-resolution radar point clouds, enabling us to recover geometric information such as object shape. We introduce Radar Implicit Shapes (RADISH), a post-processing method that combines traditional radar detection with modern computer vision techniques. RADISH first identifies surface scattering points in radar backprojection images through constant false alarm rate (CFAR) thresholding. A signed distance function is then fit to the point cloud to implicitly represent the object surface boundary. We experimentally demonstrate that RADISH can generate smooth and natural 3D shape renderings of imaged objects, potentially aiding downstream tasks such as Earth remote sensing or concealed object classification. Our initial set of results has been published in IEEE CISA 2025, with ongoing extensions exploring how RADISH performs on a variety of materials.
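
As a hedged illustration of the detection front end, the sketch below implements a textbook 1D cell-averaging CFAR detector; RADISH's actual thresholding operates on backprojection images and may differ in detail.

```python
import numpy as np

def ca_cfar_1d(power, num_train=16, num_guard=4, pfa=1e-4):
    """Cell-averaging CFAR detector on a 1D power profile.

    The noise level at each cell is estimated by averaging training cells
    on both sides (excluding guard cells); the threshold scales that
    estimate to hold the desired false-alarm probability.
    """
    n = 2 * num_train
    alpha = n * (pfa ** (-1.0 / n) - 1.0)  # scaling for exponential noise
    half = num_train + num_guard
    detections = []
    for i in range(half, len(power) - half):
        left = power[i - half : i - num_guard]
        right = power[i + num_guard + 1 : i + half + 1]
        noise = (left.sum() + right.sum()) / n
        if power[i] > alpha * noise:
            detections.append(i)
    return detections

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, 512)   # noise floor
profile[200] += 60.0                  # a strong scatterer
print(ca_cfar_1d(profile))            # -> [200] (typically)
```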

Crowd analytics using off-the-shelf mmWave MIMO radar:
We have developed a new mathematical modeling and processing pipeline for crowd analytics (e.g., crowd counting and anomaly detection) using mmWave radar. A major bottleneck when sensing crowds with mmWave signals is the blockage caused by the first layer of people, since mmWave signals are significantly attenuated after passing through the human body. This crowd shadowing can significantly degrade sensing quality in crowded areas. To address this, we have derived a novel closed-form mathematical expression that characterizes the statistical dynamics of undercounting due to crowd shadowing. This new approach enables estimation of large crowds with mmWave signals, and may have significant impact on crowd management and urban planning. Our initial results, which include extensive experiments using off-the-shelf mmWave radar (e.g., TI AWR2243BOOST), were presented at ACM MobiCom 2024.
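
The shadowing effect itself is easy to illustrate with a toy occlusion simulation, shown below. Note that the published result is a closed-form statistical characterization, derived rather than simulated, and the body-width parameter here is an arbitrary assumption.

```python
import numpy as np

def visible_count(positions, radar=np.zeros(2), body_halfwidth_deg=3.0):
    """Count people not angularly shadowed by someone nearer to the radar.

    A toy occlusion model: a person is hidden if someone closer to the
    radar lies within +/- body_halfwidth_deg of the same bearing.
    """
    rel = positions - radar
    ranges = np.linalg.norm(rel, axis=1)
    bearings = np.degrees(np.arctan2(rel[:, 1], rel[:, 0]))
    visible = 0
    seen_bearings = []
    for i in np.argsort(ranges):                 # nearest first
        if all(abs(bearings[i] - b) > body_halfwidth_deg
               for b in seen_bearings):
            visible += 1
        seen_bearings.append(bearings[i])
    return visible

rng = np.random.default_rng(2)
counts = [visible_count(rng.uniform([2, -3], [8, 3], size=(20, 2)))
          for _ in range(1000)]
print(f"true crowd: 20, mean visible: {np.mean(counts):.1f}")  # undercount
```
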
Moreover, extracting key crowd semantics with off-the-shelf mmWave transceivers is a considerable challenge and an unsolved problem. We have proposed a new mathematical foundation to enable this. More specifically, we have proposed a signal processing pipeline that combines optical flow estimation concepts from vision with novel statistical and morphological noise filtering to generate high-fidelity mmWave flow fields. We then introduce a novel approach that transforms these fields into directed geometric graphs, where edges capture dominant flow currents. We further show how to extract key crowd semantics by analyzing the local Jacobian and computing the corresponding curl and divergence. Finally, we show how to estimate the crowd size for any underlying crowd spatial usage pattern using tools from stochastic geometry. We have confirmed the theoretical foundation through 21 experiments on crowds of up to 20 people across 3 areas, using commodity mmWave radar (e.g., TI AWR2243BOOST). This work has resulted in two papers: one under review at Nature Portfolio Journal (npj) Wireless Technology and one to appear at Asilomar 2025. The Asilomar paper has been selected as one of a small number of finalists in the Asilomar Student Paper Contest, with the Best Student Paper Award to be determined at the November conference. Overall, this work can have a significant impact on crowd sensing, management, and planning.
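
The differential-analysis step can be illustrated directly: given a 2D flow field on a grid, the divergence and scalar curl obtained from the local Jacobian distinguish, e.g., dispersal from circulation. A minimal sketch, assuming a clean (already denoised) flow field:

```python
import numpy as np

def flow_curl_divergence(vx, vy, dx=1.0, dy=1.0):
    """Divergence and (scalar) curl of a 2D flow field on a grid.

    vx, vy: (H, W) arrays with the x- and y-components of the flow,
    rows indexing y and columns indexing x. Divergence highlights
    sources/sinks (gathering or dispersal); curl highlights rotation.
    """
    dvx_dy, dvx_dx = np.gradient(vx, dy, dx)
    dvy_dy, dvy_dx = np.gradient(vy, dy, dx)
    divergence = dvx_dx + dvy_dy
    curl = dvy_dx - dvx_dy
    return divergence, curl

# Example: a purely rotational (vortex-like) flow, v = (-y, x).
y, x = np.mgrid[-5:6, -5:6].astype(float)
div, curl = flow_curl_divergence(-y, x)
print(div.mean(), curl.mean())  # ~0 and ~2: rotation without expansion
```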

100+ GHz radar testbed: We are setting up a 140 GHz radar testbed at UCSB, leveraging hardware developed under previous programs as well as new hardware being developed under 4D100 and related programs. Under previous programs (the DARPA/SRC JUMP1.0 center ComSenTer), we developed 140 GHz CMOS single-channel transmitter and receiver ICs, and integrated them into 8-channel MIMO arrays using LTCC packages carrying 8 such ICs and 8-element antenna arrays. To support this development, under ComSenTer we also made, as test structures, LTCC packages with a single antenna to accommodate a single transmitter or receiver IC. Under 4D100, we have mounted the transmitter and receiver ICs onto the single-antenna LTCC substrates, thereby forming single-element transmitter and receiver modules. We are presently working to assemble an 8-transmitter, 8-receiver MIMO radar testbed using these ICs. Testbed demonstrations of 140 GHz MIMO radar involve close collaboration between mmWave hardware experts (Buckwalter and Rodwell) and modeling/algorithms experts (Madhow, Mostofi, and Sabharwal).

Hardware design at 100+ GHz:

We are building highly power-efficient wideband transmit arrays at 100+ GHz in low-cost semiconductor processes. Our initial effort focused on a 4-channel MIMO tile in 22-nm CMOS SOI. The low-resolution transmitter uses only 2 bits to control the output phase between the I and Q planes while supporting an extremely wide bandwidth covering 110-160 GHz, easing tuning for future applications and demonstrations. Further, we completed an LTCC package using substrate-integrated waveguides (SIWs) that transition the 140 GHz signals through C4 bumps into the SIW for routing to lambda/2-spaced Vivaldi antennas. Finally, we fabricated a test board to provide RF and DC connections to the LTCC package.
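
To illustrate why coarse 2-bit phase control is workable for beamforming, the sketch below quantizes ideal steering phases to four states (0/90/180/270 degrees) and evaluates the resulting array factor; the 8-element linear geometry is an illustrative assumption, not the tile's actual configuration.

```python
import numpy as np

def array_factor_db(phase_states, steer_deg, scan_deg, d_over_lambda=0.5):
    """Array factor of a ULA whose phase shifters have few discrete states.

    phase_states: number of available phases (4 for a 2-bit shifter).
    Ideal steering phases are snapped to the nearest available state,
    and the normalized array factor is evaluated over scan angles.
    """
    n = np.arange(8)                                   # 8-element example
    ideal = -2 * np.pi * d_over_lambda * n * np.sin(np.radians(steer_deg))
    step = 2 * np.pi / phase_states
    quant = np.round(ideal / step) * step              # snap to 2-bit grid
    scan = np.radians(np.asarray(scan_deg))
    phase = 2 * np.pi * d_over_lambda * np.outer(np.sin(scan), n)
    af = np.abs(np.exp(1j * (phase + quant)).sum(axis=1)) / len(n)
    return 20 * np.log10(np.maximum(af, 1e-6))

angles = np.linspace(-90, 90, 361)
af_db = array_factor_db(4, steer_deg=20, scan_deg=angles)
print(f"peak at {angles[af_db.argmax()]:.1f} deg")     # near 20 deg
```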

These first-generation 140 GHz radar transmitter chips, fabricated in 22-nm CMOS SOI with 4 channels, were demonstrated to achieve an EIRP of 20 dBm when integrated with our antenna module, with an output power of 50 mW per channel, corresponding to a power-added efficiency of 8%. During the last year, UCSB completed measurement of five 140 GHz radar modules and reported these results at RFIC 2025. Additionally, we have verified all the building blocks of our 140 GHz radar receivers, demonstrating a D-band noise figure of 7.5 dB. Professor Muhannad Bakir's group at Georgia Tech has assisted our group in fabricating the antennas on a glass substrate. Antenna samples have been measured to verify performance, demonstrating a match bandwidth from 120 GHz to 160 GHz that agrees closely with return-loss simulations. Fabrication issues were characterized to develop process improvements for a second fabrication run. Additionally, PCBs and mounting blocks were designed and fabricated so that the chip and antenna can be tested on a gimbal in an antenna test chamber. We are also working on packaged versions of 70 and 140 GHz phased-array transmitters and receivers for radar systems experiments, leveraging additional ICs being designed under contract support from the NSF RINGS (140 GHz) and SRC-JUMP2.0 (75 GHz) programs. The support from this project is being used for array antenna, module, and package integration and array beam demonstrations. Results have not yet been obtained, as the ICs have required several design revisions and are not yet ready.


Publications

Code repositories

  • FusionSense: This repository contains the source code for the FusionSense team as part of UC Santa Barbara's Electrical Engineering Senior Design Capstone project.
  • Compressive MIMO for Extended Targets: This repository contains the source code for extended target modeling for compressive MIMO radar.

Broader Impact

The increasingly used term Joint Communication and Sensing (JCAS) reflects an emerging consensus that next-generation wireless networks should be multi-function, supporting both communication and sensing at scale. Our vision of JCAI gives concrete shape to this trend, viewing imaging as a layered network service analogous to data communication, and pushing the limits of resolution and energy efficiency so as to make it attractive to deploy this service at scale. The concepts and methods we develop have potential impact in a vast array of applications, including vehicular autonomy and road safety, manufacturing automation, indoor and outdoor security, eldercare, and healthcare. The PIs will work closely with industry partners, building on their strong track record in transitioning mmWave research, to maximize the impact of this research. UCSB is a minority-serving institution, and we will leverage the diversity of our undergraduate population for recruitment of REU researchers. The proposed research is synergistic with ongoing curriculum reform at UCSB aimed at increasing the flexibility of undergraduates to specialize in sub-disciplines of interest, and will be incorporated into the undergraduate curriculum through courses, capstone projects, and REU projects.

Educational Resources

Software lab on mmWave radar developed for UCSB undergraduate communications sequence

Experimental Data used in the lab (Thanks to Prof. Yasamin Mostofi's group for collecting the data):

System Setup - Two mmWave radars tracking human targets.

A brief description of the experimental data format and the chirp parameters can be found here.

(a) Single moving human target seen from the perspective of two radars (Straight Trajectory) (Radar 1 Data, Radar 2 Data).

(b) Single moving human target seen from the perspective of two radars (Random Trajectory) (Radar 1 Data, Radar 2 Data).