I am a PhD researcher in AI and Music at the Centre for Digital Music (C4DM), Queen Mary University of London, where I focus on deep learning for multi-instrument music transcription. I bring cross-disciplinary experience from both academia and industry, having worked on spatial audio personalization at PlayStation London and neural audio effect systems at a music tech startup. My skill set spans audio signal processing, machine learning, and real-time systems, and I'm passionate about building intelligent tools that support musicians, producers, and engineers in creative workflows.
Technical Skills
Programming & Tools
Python, C/C++, MATLAB, Bash, Git, Docker
Experience with embedded systems, real-time audio applications (JUCE, Bela), and cloud/HPC environments
Machine Learning & Deep Learning
Frameworks: PyTorch, TensorFlow
Architectures: CNNs, LSTMs, Transformers, Autoencoders, VQ/VQGANs, Masked Autoencoders, JEPAs
Experienced in distributed training on Google Cloud (Vertex AI) and high-performance clusters
Audio & Signal Processing
Feature extraction, data augmentation, real-time audio processing
Music Information Retrieval (MIR), 3D audio, and Head-Related Transfer Functions (HRTFs) for video games
Analytical & Problem-Solving Skills
Strong foundation in mathematics, algorithms, and statistics
Quick to grasp new technical concepts across diverse areas
Communication & Teamwork
Experienced technical writer (papers, reports) with a strong focus on clear communication
Led AI reading groups in MIR; supported a visually impaired MSc student by adapting technical content into accessible formats
Comfortable working in collaborative, interdisciplinary teams