AI Research Portfolio

AI Researcher

Emmanuel Muñiz

A curated collection of my work in Research, Trading, and Robotics. Each project entry details my process, technical implementation, and the tools used. Only featured repositories and their associated media are displayed here, to keep the overview of my technical expertise focused.

Multimodal AI · Transformers · Interpretability (XAI) · GenAI · LLMs & SLMs · Self-Supervised Reinforcement Learning · CV & NLP

Research

Medical AI research focusing on two primary domains: radiology report generation and cerebral palsy risk prediction.

Radiology Report Generation Research

What I did: Developed a novel implementation distinct from current SOTA approaches to maximize information retention in radiology report generation. This led to LAnA ↗ (Layerwise Anatomical Attention), an anatomy-guided model collection that mimics a doctor's diagnostic perception. The model is lightweight, runs on a standard CPU in ~10 seconds, and has garnered over 3,000 Hugging Face downloads.

How I did it: Rather than relying on prompt engineering, I explicitly injected segmented clinical regions into the model by modifying GPT-2's internal attention mechanisms layer by layer. The model was trained entirely on GCP under a strict $300 budget; this architectural approach outperformed vanilla baselines, and the work was published to arXiv and deployed as an interactive Hugging Face Space.

What I used: PyTorch, Hugging Face (Transformers, PEFT, Accelerate), DINOv3, GPT-based Decoders, Layer-Wise Anatomical Attention, LoRA/QLoRA, Vertex AI (GCP), Docker, and NVIDIA A100.
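The core idea of anatomy-guided attention can be sketched as an additive bias on the attention logits: patches belonging to a segmented clinical region get boosted before the softmax. The sketch below is a minimal NumPy illustration of that mechanism; the region/bias scheme here is hypothetical and simplified, not the paper's exact layer-wise GPT-2 injection.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def anatomical_attention(q, k, region_bias, alpha=1.0):
    """Scaled dot-product attention with an additive anatomical bias.

    region_bias[i, j] is large when text token i should attend to image
    patch j inside a segmented clinical region (hypothetical scheme; the
    paper's exact layer-wise injection into GPT-2 differs in detail).
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)          # standard attention scores
    return softmax(logits + alpha * region_bias)

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 report tokens (queries)
k = rng.normal(size=(6, 8))    # 6 image patches (keys)
bias = np.zeros((4, 6))
bias[:, :2] = 6.0              # steer attention toward one region's patches

attn = anatomical_attention(q, k, bias)
# each row is a distribution over patches, concentrated on the biased region
```

Because the bias is added pre-softmax, it reshapes where attention mass goes without breaking the normalization, which is what lets the guidance be applied per layer.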

Publication • December 2025

Radiology Report Generation with Layer-Wise Anatomical Attention

First Author & Lead Researcher • arXiv:2512.16841

Read Paper ↗

Cerebral Palsy Research

What I did: Applied advanced clustering and predictive modeling to a dataset of 4,000+ medical records to identify neonatal risk phenotypes and improve early detection of cerebral palsy.

How I did it: Developed a robust analytical pipeline using Random Forest models for classification and K-Means/Hierarchical clustering for phenotype discovery. Validated model performance and stability using Bootstrap ARI and Silhouette metrics, achieving 92% accuracy in high-risk group identification.

What I used: Scikit-learn, Pandas, NumPy, Random Forest, K-Means, Hierarchical Clustering, Bootstrap ARI, Silhouette Metrics, and Matplotlib/Seaborn.
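The clustering-stability step described above can be sketched with scikit-learn: fit K-Means on the full data, then re-cluster bootstrap resamples and compare labels on the resampled points via the adjusted Rand index. Synthetic blobs stand in for the real perinatal features; cluster counts and sample sizes here are illustrative, not the study's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_score

# Synthetic stand-in for perinatal features (the real study used 4,000+ records)
X, _ = make_blobs(n_samples=400, centers=3, cluster_std=1.0, random_state=0)

base = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
sil = silhouette_score(X, base)            # cohesion/separation of the partition

# Bootstrap ARI: re-cluster resampled data, compare labels on the shared points
rng = np.random.default_rng(0)
aris = []
for _ in range(20):
    idx = rng.choice(len(X), size=len(X), replace=True)
    boot = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[idx])
    aris.append(adjusted_rand_score(base[idx], boot))
stability = float(np.mean(aris))           # near 1.0 => phenotypes are stable
```

A high mean bootstrap ARI indicates the discovered phenotypes are not artifacts of one particular sample, which is the point of the validation step.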

Journal Submission • 2026

Neonatal Risk Phenotypes for Cerebral Palsy Based on Integrated Perinatal Exposures: A Cohort Study

Co-Author • European Journal of Paediatric Neurology

Trading Projects

What I did: Developed systematic, probability-driven automated trading strategies and portfolio management algorithms.

How I did it: Applied reinforcement learning for dynamic portfolio balancing and trained deep sequential models on time-series and sentiment data.

What I used: Reinforcement Learning (Stable Baselines3), OpenAI Gymnasium, LSTMs, Time Series Modeling, VADER (Sentiment Analysis), and Python/Pandas.
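The rebalancing setup can be sketched as a Gymnasium-style environment: the agent's action is a vector of target weights, and the reward is the log return of the rebalanced portfolio. This toy class is a hypothetical, dependency-free sketch of the interface an SB3 agent would train against, not the production environment.

```python
import numpy as np

class PortfolioEnv:
    """Minimal Gymnasium-style environment for dynamic rebalancing.

    Actions are target weights over the assets; reward is the log return
    of the rebalanced portfolio over one step. Illustrative sketch only.
    """

    def __init__(self, prices):
        self.prices = np.asarray(prices, dtype=float)  # shape (T, n_assets)
        self.t = 0

    def reset(self):
        self.t = 0
        return self.prices[0], {}

    def step(self, action):
        w = np.clip(action, 0.0, None)
        w = w / w.sum() if w.sum() > 0 else np.full(len(action), 1 / len(action))
        gross = self.prices[self.t + 1] / self.prices[self.t]  # per-asset return
        reward = float(np.log(w @ gross))                      # log portfolio return
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self.prices[self.t], reward, done, False, {}

# usage: two assets, three time steps
env = PortfolioEnv([[100.0, 50.0], [101.0, 49.0], [103.0, 48.0]])
obs, _ = env.reset()
obs, r, done, truncated, _ = env.step(np.array([0.5, 0.5]))
```

Using log returns makes rewards additive across steps, so maximizing cumulative reward corresponds to maximizing terminal portfolio growth.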

Robotics & Vision

What I did: Built autonomous navigation, physical manipulation, and real-time computer vision tracking systems.

How I did it: Integrated software models with physical actuators (UR3e, omnidirectional cars) and simulated environments (AirSim) using real-time visual feedback loops.

What I used: Computer Vision, OpenCV, YOLO, ROS, Universal Robots SDK, PID Controllers, Path Planning, and Arduino/C++.
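A typical visual feedback loop of this kind feeds the pixel error between a detected target and the frame center into a PID controller, whose output becomes a velocity command for the actuator. The sketch below is a generic discrete PID with illustrative gains, not the tuned controllers from these projects.

```python
class PID:
    """Discrete PID controller, e.g. for centering a detected bounding
    box in frame by steering the base (gains here are illustrative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                     # accumulated error
        derivative = (error - self.prev_error) / self.dt     # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.05, kd=0.1, dt=1.0)
u1 = pid.update(120.0)  # 78.0: large pixel error -> strong velocity command
u2 = pid.update(40.0)   # 20.0: error shrinking; derivative term damps output
```

The proportional term drives the bulk of the correction, the integral removes steady-state offset, and the derivative damps overshoot as the target approaches center.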

Experience Timeline

Resume-backed roles and research milestones.