Prachi Garg

Hi! I recently started my Ph.D. in Computer Science at the University of Illinois Urbana-Champaign, where I am advised by Prof. Derek Hoiem. I like to build general-purpose and adaptable vision and robotics systems, particularly models that (i) continually acquire new skills and concepts over time, and (ii) enable user personalization through interaction and feedback.

Previously, I completed my Master of Science in Robotics at Carnegie Mellon University. For my thesis research, I worked on continual personalization of human action recognition, advised by Prof. Fernando De La Torre and in collaboration with the XR Input Team at Meta Reality Labs. I spent summer 2024 building a multi-view camera-LiDAR 3D perception pipeline for the US Department of Transportation's Safe Intersection Challenge with Prof. Srinivasa Narasimhan.

In the past, I have had wonderful opportunities working with Prof. Vineeth N Balasubramanian (CMU and IIIT-Hyderabad), Prof. C V Jawahar (IIIT-Hyderabad), and Prof. Frederic Jurie (in beautiful Normandy, France).




Recent Publications


POET: Prompt Offset Tuning for Continual Human Action Adaptation

Prachi Garg, Joseph K J, Vineeth N B, Necati Cihan Camgoz, Chengde Wan, Kenrick Kin, Weiguang Si, Shugao Ma, Fernando De La Torre

ECCV 2024, Oral Presentation (2.32%)

Project Page / Paper / Code / Talk Video

Data-Free Class-Incremental Hand Gesture Recognition

Shubhra Aich*, Jesus Ruiz*, Zhenyu Lu, Prachi Garg, K J Joseph, Alvaro Garcia, Vineeth N B, Kenrick Kin, Chengde Wan, Necati Cihan Camgoz, Shugao Ma, Fernando De La Torre

ICCV 2023

Paper / Code

Multi-Domain Incremental Learning for Semantic Segmentation

Prachi Garg, Rohit Saluja, Vineeth N B, Chetan Arora, Anbumani Subramanian, C V Jawahar

WACV 2022

Paper / Video / Code / Poster / Supplementary


News

[Oct 2024] Attending ECCV in Milan, Italy to give an oral presentation on POET.
[Sept 2024] I had a fun summer working on multi-view camera-LiDAR 3D perception with Prof. Srinivasa Narasimhan.
[Aug 2024] I have started my CS Ph.D. at UIUC; super excited to work with Prof. Derek Hoiem.
[July 2024] My favourite work to date, POET, on continually personalizing prompts, has been accepted to ECCV 2024.
[May 2024] I successfully defended my Master's thesis. [Thesis]
[Mar 2024] New blog post on my mentoring experience with the CMU AI Scholars Summer Program. Highly recommended.
[Jan 2024] CVPR 2024 Reviewer.
[Nov 2023] Gave a talk on 'Prompt Tuning for Practical Continual Learning' at the Multi-Modal Foundation Models course. [Slides]
[Oct 2023] Presented our work 'Data-Free Class-Incremental Hand Gesture Recognition' at ICCV 2023 in Paris.
[Oct 2023] Our work on 'Continual Few-Shot Learning for Activity Recognition' using lightweight prompt tuning is under review!
[Jul 2023] Project Leader at CMU CS Pathways, AI Scholars Summer Program. My high school mentees built their first CV-ML project! [Slides]

Selected Research Projects


Towards an AI Infused System for Objectionable Content Detection in OTT [IBM Research Laboratory]

Prachi Garg, Shivang Chopra, Mudit Saxena, Anshu Yadav, Aditya Atri, Nishtha Madaan, Sameep Mehta

Blog-post / Demo

With the substantial increase in OTT content consumption in recent years, personalized objectionable content detection and filtering has become pertinent for making movie and TV series content suitable for family or child viewing. We propose an objectionable content detection framework that leverages multiple modalities, namely (i) video frames, (ii) subtitle text, and (iii) audio, to detect (a) violence, (b) explicit NSFW content, and (c) offensive speech in videos.
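As a rough illustration of how the per-modality detectors in such a pipeline could be combined, here is a minimal late-fusion sketch in Python. All names (ModalityScores, fuse_scores, flag_segment) and the weights/thresholds are assumptions for illustration, not the project's actual code:

```python
# Illustrative late-fusion sketch: combine per-modality detector scores
# (video, subtitle text, audio) into per-category flags for a segment.
from dataclasses import dataclass

@dataclass
class ModalityScores:
    violence: float
    nsfw: float
    offensive_speech: float

def fuse_scores(video: ModalityScores, text: ModalityScores, audio: ModalityScores,
                weights=(0.5, 0.25, 0.25)) -> ModalityScores:
    """Weighted late fusion of the three modality detectors (weights are hypothetical)."""
    wv, wt, wa = weights
    return ModalityScores(
        violence=wv * video.violence + wt * text.violence + wa * audio.violence,
        nsfw=wv * video.nsfw + wt * text.nsfw + wa * audio.nsfw,
        offensive_speech=(wv * video.offensive_speech + wt * text.offensive_speech
                          + wa * audio.offensive_speech),
    )

def flag_segment(scores: ModalityScores, thresholds=(0.7, 0.7, 0.7)) -> dict:
    """Mark which objectionable categories exceed user-chosen thresholds."""
    tv, tn, to = thresholds
    return {
        "violence": scores.violence >= tv,
        "nsfw": scores.nsfw >= tn,
        "offensive_speech": scores.offensive_speech >= to,
    }

# Example: a segment with strong visual violence but benign text and audio.
flags = flag_segment(fuse_scores(ModalityScores(0.9, 0.1, 0.0),
                                 ModalityScores(0.2, 0.0, 0.1),
                                 ModalityScores(0.3, 0.0, 0.0)))
```

Per-modality thresholds could then be exposed to the user to personalize what gets filtered.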

Memorization and Generalization in CNNs using Soft Gating Mechanisms [Image Team GREYC, University of Caen Normandy]

Prachi Garg, Shivang Agarwal, Alexis Lechervy, Frederic Jurie

Technical Report / Code / Technical Report (Suboptimal ResNet Gating Mechanisms)

A deep neural network learns general patterns for the large subset of samples that lie in-distribution and memorizes the out-of-distribution samples. While fitting to this noise, the generalization error increases and the network performs poorly on the test set. In this work, we examine whether dedicating different layers to generalizable and memorizable samples could simplify the decision boundary learnt by the network and lead to improved generalization. The initial layers, which are shared across all examples, learn general patterns, while certain additional deeper layers are dedicated to memorizing the out-of-distribution examples via a soft gating mechanism.
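A minimal sketch of the soft-gating idea, written in PyTorch purely for illustration (the module names and structure are assumptions, not the project's actual architecture): a learned per-sample gate decides how much of a dedicated "memorization" branch to add on top of the shared path.

```python
# Illustrative soft-gated block: a shared path for general patterns plus a
# gated extra-capacity path intended for hard / out-of-distribution samples.
import torch
import torch.nn as nn

class SoftGatedBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())    # general-pattern path
        self.memorize = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # dedicated memorization path
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())     # per-sample soft gate in [0, 1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)                     # shape (batch, 1), broadcast over features
        return self.shared(x) + g * self.memorize(x)

features = torch.randn(8, 128)               # e.g. features from the earlier shared layers
block = SoftGatedBlock(dim=128)
out = block(features)                         # gate near 0 keeps only the shared path
```

Because the gate is differentiable, which samples get routed through the extra layers is learned jointly with the rest of the network rather than fixed in advance.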


Website template from here and inspired by here.