Research

My academic research focuses on computer vision and deep learning, more specifically self-supervised learning, video representations with object permanence, perception for robotics, and computational photography. I believe that visual data contains a great deal of natural structure that we can exploit and learn from with little manual annotation, or sometimes none at all. Check out some of my projects below for an overview of what I have done so far and what keeps me busy today. (Note that this page may sometimes be out of date ;-))

Revealing Occlusions with 4D Neural Fields (2020-2021)

(Basile Van Hoorick, Purva Tendulkar, Didac Suris, Dennis Park, Simon Stent, Carl Vondrick; Published in CVPR 2022 with oral presentation.)

Although the motorcycle (circled in yellow) becomes fully occluded in the video, we can still perform many visual recognition tasks, such as predicting its location, reconstructing its appearance, and classifying its semantic category.

For computer vision systems to operate in dynamic situations, they need to be able to represent and reason about object permanence. We introduce a framework for learning to estimate 4D visual representations from monocular RGB-D video, which is able to persist objects even once they become fully occluded. Unlike traditional video representations, we encode point clouds into a continuous representation, which permits the model to attend across the spatiotemporal context to resolve occlusions. On two large video datasets that we release along with this paper, our experiments show that the representation is able to successfully reveal occlusions for several tasks, without any architectural changes. Visualizations show that the attention mechanism automatically learns to follow occluded objects. Since our approach can be trained end-to-end and is easily adaptable, we believe it will be useful for handling occlusions in many video understanding tasks.
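
To make this more concrete, below is a minimal, heavily simplified PyTorch sketch of the general idea; all class names, dimensions, and layer choices are my own placeholders, not the paper's actual architecture. Queries at continuous (x, y, z, t) coordinates cross-attend over features of the observed RGB-D point cloud, so evidence from moments where an object is visible can inform predictions at moments where it is hidden.

```python
# Minimal sketch (not the paper's architecture): a continuous "4D field" that is
# queried at (x, y, z, t) coordinates and attends over features of an input
# RGB-D point cloud, so visible frames can help explain occluded ones.
import torch
import torch.nn as nn


class Occlusion4DField(nn.Module):
    def __init__(self, feat_dim=128, num_freqs=8):
        super().__init__()
        self.num_freqs = num_freqs
        # Per-point encoder: (x, y, z, t, r, g, b) -> feature vector.
        self.point_encoder = nn.Sequential(
            nn.Linear(7, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        # Query embedding: Fourier features of the continuous (x, y, z, t) query.
        self.query_embed = nn.Linear(4 * 2 * num_freqs, feat_dim)
        # Cross-attention: queries look at the whole spatiotemporal point cloud.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Head predicting occupancy (1 value) + color (3 values) at the query.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 4))

    def fourier(self, x):
        freqs = 2.0 ** torch.arange(self.num_freqs, device=x.device)
        angles = x[..., None] * freqs                  # (B, Q, 4, F)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

    def forward(self, points, queries):
        # points:  (B, N, 7) observed RGB-D point cloud over the video clip.
        # queries: (B, Q, 4) continuous (x, y, z, t) locations to decode.
        ctx = self.point_encoder(points)               # (B, N, D)
        q = self.query_embed(self.fourier(queries))    # (B, Q, D)
        fused, _ = self.attn(q, ctx, ctx)              # attend across space-time
        return self.head(fused)                        # (B, Q, 4): occupancy + RGB


# Toy usage: decode a handful of space-time coordinates from a random cloud.
model = Occlusion4DField()
out = model(torch.randn(2, 1024, 7), torch.rand(2, 16, 4))
print(out.shape)  # torch.Size([2, 16, 4])
```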

Link to paper / Link to website / Link to GitHub repository


Dissecting Image Crops (2019-2020)

(Basile Van Hoorick, Carl Vondrick; Published in ICCV 2021)

Lens with chromatic aberration, where green light is magnified differently than other wavelengths.

The elementary operation of cropping underpins nearly every computer vision system, ranging from data augmentation and translation invariance to computational photography and representation learning. This paper investigates the subtle traces introduced by this operation. For example, despite refinements to camera optics, lenses will leave behind certain clues, notably chromatic aberration and vignetting. Photographers also leave behind other clues relating to image aesthetics and scene composition. We study how to detect these traces, and investigate the impact that cropping has on the image distribution. While our aim is to dissect the fundamental impact of spatial crops, there are also a number of practical implications to our work, such as detecting image manipulations and equipping neural network researchers with a better understanding of shortcut learning.
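
As a toy illustration of the kind of signal involved (my own sketch, not the model from the paper): a small CNN can be trained to predict which cell of a coarse grid over the original photo a given crop was taken from, which is learnable precisely because traces such as chromatic aberration and vignetting vary systematically with position in the frame.

```python
# Minimal sketch (illustrative, not the paper's model): train a small CNN to
# predict which cell of a coarse G x G grid over the original photo a given
# crop was taken from. Residual lens cues such as chromatic aberration and
# vignetting are what make this location recoverable in the first place.
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = 4  # classify crops into a 4 x 4 grid of absolute locations


class CropLocator(nn.Module):
    def __init__(self, grid=GRID):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(128, grid * grid)

    def forward(self, crops):
        # crops: (B, 3, H, W) image crops; returns logits over grid cells.
        return self.classifier(self.features(crops).flatten(1))


def random_crop_with_label(image, crop_size=96, grid=GRID):
    """Cut a random crop and label it by the grid cell of its center."""
    _, h, w = image.shape
    top = torch.randint(0, h - crop_size + 1, (1,)).item()
    left = torch.randint(0, w - crop_size + 1, (1,)).item()
    cy, cx = top + crop_size // 2, left + crop_size // 2
    label = (cy * grid // h) * grid + (cx * grid // w)
    return image[:, top:top + crop_size, left:left + crop_size], label


# Toy training step on a random "photo" standing in for real data.
model, photo = CropLocator(), torch.rand(3, 512, 768)
crop, label = random_crop_with_label(photo)
loss = F.cross_entropy(model(crop[None]), torch.tensor([label]))
loss.backward()
print(float(loss))
```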

Link to paper / Link to GitHub repository


Image Outpainting using GANs (2019)

(Basile Van Hoorick; course project in a class taught by Prof. Peter Belhumeur)

Basic illustration of outpainting

Inpainting, which attempts to fill in missing regions within photos, is an active line of research that typically relies on Generative Adversarial Networks. The opposite problem, predicting which pixels lie beyond the borders of an image (hence the term outpainting), is one I consider equally worthy of investigation. Many architectural aspects and ideas can naturally be adapted from inpainting. Check out the submission on arXiv below to see which hallucinations our model comes up with!
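
For flavor, here is a heavily simplified sketch of a single GAN-based outpainting training step; the layers, mask size, and loss weights are placeholders of my own rather than the model described in the paper. The borders of an image are masked out, the generator hallucinates them, and the loss combines pixel reconstruction on the hidden region with an adversarial term.

```python
# Heavily simplified sketch of one GAN-based outpainting training step
# (illustrative only, not the released code): mask out the image borders,
# let the generator hallucinate them, and combine a pixel reconstruction
# loss on the hidden region with an adversarial loss on the completed image.
import torch
import torch.nn as nn
import torch.nn.functional as F


def border_mask(h, w, margin):
    """1 inside the kept center region, 0 in the border to be outpainted."""
    m = torch.zeros(1, 1, h, w)
    m[..., margin:h - margin, margin:w - margin] = 1.0
    return m


generator = nn.Sequential(                      # stand-in for an encoder-decoder
    nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),  # input: masked RGB + mask channel
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())

discriminator = nn.Sequential(                  # stand-in for a patch critic
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1))

images = torch.rand(2, 3, 128, 128)             # ground-truth full images
mask = border_mask(128, 128, margin=32)         # center kept, borders hidden
masked = torch.cat([images * mask, mask.expand(2, -1, -1, -1)], dim=1)

# Generator step only (the discriminator would be updated in a separate step):
# reconstruct the hidden borders and try to fool the discriminator.
fake = generator(masked)
logits = discriminator(fake)
adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
rec_loss = F.l1_loss(fake * (1 - mask), images * (1 - mask))
g_loss = rec_loss + 0.01 * adv_loss
g_loss.backward()
print(float(g_loss))
```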

Link to paper / Link to GitHub repository


FPGA-based Simultaneous Localization and Mapping using High-Level Synthesis (2018-2019)

(Basile Van Hoorick; Supervised by Prof. Erik D’Hollander and Prof. Bart Goossens)

Colorized depth map (input) + integrated 3D model (output)

This master’s thesis at Ghent University was an exploration of a robotic navigation algorithm called Simultaneous Localization and Mapping (SLAM), which helps autonomous agents track their location within an environment while at the same time creating a dense model of that very environment. In an effort to implement a 3D variant of this application on embedded systems (more specifically, Field-Programmable Gate Arrays or FPGAs), I devised guidelines to facilitate the translation of platform-agnostic C/C++ code to FPGA-optimized designs using the High-Level Synthesis (HLS) design technique. Furthermore, I implemented and compared several mappings of typical software blocks to efficient dataflow architectures on programmable hardware.
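
To give a feel for the kind of software block such a pipeline contains, below is a plain NumPy sketch of fusing a single depth map into a truncated signed distance (TSDF) voxel volume, a common building block of dense 3D SLAM. It is purely illustrative reference behavior with made-up camera intrinsics and grid parameters, not the FPGA dataflow design from the thesis.

```python
# Illustrative only: a plain-software sketch of one block that dense 3D SLAM
# pipelines typically contain, fusing a depth map into a truncated signed
# distance (TSDF) voxel grid. The thesis concerns mapping blocks like this
# onto FPGA dataflow architectures via HLS; this is just reference behavior.
import numpy as np

# Camera intrinsics and voxel grid parameters (made-up values).
FX = FY = 525.0
CX, CY = 319.5, 239.5
VOXEL_SIZE, TRUNC = 0.02, 0.08          # 2 cm voxels, 8 cm truncation band
GRID = 64                                # 64^3 voxel volume in front of camera

tsdf = np.ones((GRID, GRID, GRID), dtype=np.float32)
weight = np.zeros_like(tsdf)


def integrate(depth, tsdf, weight):
    """Fuse one depth map (meters, HxW) into the TSDF volume, camera at origin."""
    ii, jj, kk = np.meshgrid(np.arange(GRID), np.arange(GRID), np.arange(GRID),
                             indexing="ij")
    # Voxel centers in camera coordinates: x, y centered, z in front of camera.
    x = (ii - GRID / 2) * VOXEL_SIZE
    y = (jj - GRID / 2) * VOXEL_SIZE
    z = (kk + 1) * VOXEL_SIZE
    # Project voxel centers into the depth image.
    u = np.round(FX * x / z + CX).astype(int)
    v = np.round(FY * y / z + CY).astype(int)
    h, w = depth.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    sdf = d - z                          # signed distance along the viewing ray
    update = (d > 0) & valid & (sdf > -TRUNC)
    tsdf_new = np.clip(sdf / TRUNC, -1.0, 1.0)
    # Weighted running average, the usual TSDF fusion rule.
    w_new = weight + update
    tsdf[:] = np.where(update,
                       (tsdf * weight + tsdf_new) / np.maximum(w_new, 1), tsdf)
    weight[:] = w_new


# Toy usage with a synthetic flat wall one meter away from the camera.
integrate(np.full((480, 640), 1.0, dtype=np.float32), tsdf, weight)
print(tsdf.min(), tsdf.max())
```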

Link to dissertation