My academic research focuses on computer vision and deep learning, more specifically self-supervised learning, perception for robotics, and computational photography. Check out some of my projects below for an overview of what I have done so far and what keeps me busy today. (Note that most of the time, this page is a bit out of date ;-))

Tracking in 3D space (2020-)

(Supervised by Prof. Carl Vondrick, with Didac Suris, Abby Lu)


Image patch localization for crop detection (2019-)

(Supervised by Prof. Carl Vondrick)

Integrated gradients for input attribution visualization

Most existing work in digital forensics requires labeled examples of manipulated media in order to train a neural network that distinguishes real from fake. Consider instead the proxy task of retracing the original position of an image patch relative to the lens, using subtle cues such as chromatic aberration (see figure) and other lens imperfections. A lack of consistency in the model’s predictions over the full photo or video is a likely indicator of tampering, while less thorough modifications such as cropping may show up as systematically biased predictions. The ongoing challenge is to build an estimator accurate enough to generalize across camera pipelines, settings, orientations, and so on; the fact that this localization works at all demonstrates that images are not as translationally invariant as is commonly assumed.
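To make the detection idea concrete, here is a minimal sketch (hypothetical function and variable names; the actual estimator is a trained network, which is elided here) of scoring a photo by how consistent the per-patch position predictions are with where the patches were actually taken from:

```python
import numpy as np

def tamper_score(pred_positions, expected_positions):
    """Mean localization error between predicted and expected patch positions.

    pred_positions: (N, 2) lens-relative positions predicted by the model.
    expected_positions: (N, 2) positions the patches actually occupy in the frame.
    A high score suggests the patches are inconsistent with a single capture.
    """
    errors = np.linalg.norm(pred_positions - expected_positions, axis=1)
    return float(errors.mean())

# A 2x2 grid of patch centers in normalized image coordinates:
grid = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])

# Predictions close to the grid look authentic; scrambled predictions
# (as spliced-in content would produce) yield a large inconsistency.
consistent = tamper_score(grid + 0.01, grid)
inconsistent = tamper_score(grid[::-1], grid)
```

A real pipeline would aggregate such scores over many overlapping patches and threshold them, but the principle is the same: tampering breaks the spatial agreement between content and predicted lens position.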

Image outpainting using GANs (2019)

(Supervised by Prof. Peter Belhumeur)

Basic illustration of outpainting

Inpainting, which attempts to recover holes within photos, is an active line of research that typically uses Generative Adversarial Networks. However, the opposite problem of predicting which pixels reside outside the borders of an image (hence the term outpainting) is, in my view, equally worthy of investigation. Moreover, many architectural aspects and ideas carry over naturally from inpainting. Check out the submission on arXiv below to see which hallucinations our model has come up with!
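To illustrate how the setup mirrors inpainting, here is a minimal sketch (hypothetical helper, NumPy only, no actual GAN) of constructing an outpainting training pair: the border ring is masked out, and the generator would be asked to hallucinate it back while a discriminator judges the result.

```python
import numpy as np

def make_outpainting_pair(image, border):
    """Mask the border of an image; the generator must hallucinate it back.

    Unlike inpainting (holes inside the frame), the unknown region here is a
    ring around the known center. Returns (generator_input, target), where the
    input keeps only the center crop plus a binary mask channel marking which
    pixels are known.
    """
    h, w, _ = image.shape
    mask = np.zeros((h, w, 1), dtype=image.dtype)
    mask[border:h - border, border:w - border] = 1.0
    masked = image * mask
    return np.concatenate([masked, mask], axis=-1), image

img = np.random.rand(64, 64, 3).astype(np.float32)
inp, target = make_outpainting_pair(img, border=16)
# inp has 4 channels; the outer 16-pixel ring is zeroed out.
```

The mask-as-extra-channel trick is borrowed directly from inpainting architectures, which is one example of how ideas transfer between the two problems.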

Link to paper / Link to GitHub repository

FPGA-based Simultaneous Localization and Mapping using High-Level Synthesis (2018-2019)

(Supervised by Prof. Erik D’Hollander and Prof. Bart Goossens)

Colorized depth map (input) + integrated 3D model (output)

This master’s thesis at Ghent University was an exploration of a robotic navigation algorithm called Simultaneous Localization and Mapping (SLAM), which helps autonomous agents track their location within an environment while at the same time creating a dense model of that very environment. In an effort to implement a 3D variant of this application on embedded systems (more specifically, Field-Programmable Gate Arrays or FPGAs), I devised guidelines to facilitate the translation of platform-agnostic C/C++ code to FPGA-optimized designs using the High-Level Synthesis (HLS) design technique. Furthermore, I implemented and compared several mappings of typical software blocks to efficient dataflow architectures on programmable hardware.

Link to dissertation

Camera spoofing for digital forensics (2018)

Correlations of noise characteristics between spoofed images and authentic images captured by two different smartphone devices

Every camera device carries a fingerprint called sensor pattern noise, which can be used to trace a given photo back to the device that captured it. In this short research project, we used Generative Adversarial Networks (more specifically, CycleGAN) to transform the noise patterns of image patches from one device to another, in an attempt to fool the classic camera identification tools typically used in media forensics. Whether such an attack can fully succeed remains plausible but unproven: our experiments indicated that the source fingerprint had been effectively removed (as in the above graph), but not fully transferred across domains.
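For intuition, here is a purely synthetic toy sketch of the identification side (real pipelines extract noise residuals with wavelet denoising and use a multiplicative PRNU model; here the pattern is simply additive, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two devices' fixed sensor pattern noise.
prnu_a = rng.normal(0.0, 1.0, (64, 64))
prnu_b = rng.normal(0.0, 1.0, (64, 64))

def shoot(prnu):
    """A 'photo': random scene content plus the device's fixed pattern noise."""
    return rng.normal(0.0, 5.0, (64, 64)) + prnu

def fingerprint(photos):
    """Averaging many photos cancels the scene but keeps the fixed PRNU."""
    return np.mean(photos, axis=0)

def ncc(x, y):
    """Normalized cross-correlation, the test statistic used by classic
    camera identification tools."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y)))

fp_a = fingerprint([shoot(prnu_a) for _ in range(200)])
same_device = ncc(shoot(prnu_a), fp_a)    # clearly positive correlation
other_device = ncc(shoot(prnu_b), fp_a)   # near zero
```

The spoofing attack in this project aims to drive the first correlation down (remove the source fingerprint) while raising the second (implant the target fingerprint); our results achieved the former but only partially the latter.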