Machine Learning & Perception for Robotic Control
@ RPM Lab (SNU) & MSERVO Lab (Yonsei University)

Introduction
Before robots can act on what they see, perception outputs have to be reliably connected to control systems in real time — with latency, synchronization, and hardware constraints all working against you. This project built foundational experience in that pipeline, working across two academic labs in Korea.
Methods
1. Deep Learning for Image Classification (SNU – RPM Lab)
At Seoul National University, I implemented neural network classifiers from scratch and trained them on standard benchmarks. I also adapted a pretrained ResNet-50 backbone for real-time object detection under hardware compute constraints.
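As a flavor of the "from scratch" classifier work, here is a minimal sketch: a two-layer network with manual backpropagation, trained on a toy linearly separable dataset. The architecture, hidden size, learning rate, and data are illustrative assumptions, not the lab's actual setup.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset (assumption): label 2-D points by which side of x + y = 0 they fall on.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
data = [(x, 1.0 if x[0] + x[1] > 0 else 0.0) for x in points]

H = 8      # hidden units (illustrative)
lr = 0.5   # learning rate (illustrative)
W1 = [[random.gauss(0, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def accuracy():
    return sum((forward(x)[1] > 0.5) == (t > 0.5) for x, t in data) / len(data)

for epoch in range(30):
    for x, t in data:
        h, y = forward(x)
        # For sigmoid output + binary cross-entropy, the output-layer gradient is y - t.
        g_out = y - t
        for j in range(H):
            g_h = g_out * W2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            W2[j] -= lr * g_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * g_h * x[i]
            b1[j] -= lr * g_h
        b2 -= lr * g_out

print(f"train accuracy: {accuracy():.2f}")
```

The same loop, swapped to matrix operations and a deeper architecture, is the skeleton the full-scale classifiers build on.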
2. Computer Vision Integration with Robotics (Yonsei – MSERVO Lab)
At Yonsei University, I built a ROS-based pipeline connecting live camera input to a Franka Emika manipulator. The system ran detections on streamed camera frames and relayed the results to the robot controller, prototyping a closed loop between perception and physical action.
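The structure of that loop can be sketched without ROS. The key design choice is a latest-frame-wins buffer: when inference is slower than the camera, the controller skips stale frames instead of processing a growing backlog. The callback and function names below are stand-ins for the actual ROS subscribers and publishers, and the timings are illustrative.

```python
import threading
import time
from collections import deque

# Latest-frame-wins buffer: deque(maxlen=1) silently drops the older frame,
# so a slow detector never falls behind the camera, it just skips frames.
latest = deque(maxlen=1)

def camera_callback(frame):
    """Stand-in for a ROS image-topic callback (e.g. a rospy.Subscriber handler)."""
    latest.append(frame)

def detect(frame):
    """Stand-in for model inference; returns a (label, frame_id) 'detection'."""
    time.sleep(0.03)            # simulate ~30 ms inference latency (assumption)
    return ("object", frame)

commands = []                   # stand-in for a robot-command publisher

def control_loop(stop):
    while not stop.is_set():
        try:
            frame = latest.pop()
        except IndexError:
            time.sleep(0.001)   # no new frame yet
            continue
        commands.append(detect(frame))

stop = threading.Event()
worker = threading.Thread(target=control_loop, args=(stop,))
worker.start()

for frame_id in range(100):     # camera publishing at ~100 Hz (assumption)
    camera_callback(frame_id)
    time.sleep(0.01)

stop.set()
worker.join()
print(f"{len(commands)} commands from 100 frames (stale frames dropped)")
```

With a 100 Hz camera and ~30 ms inference, roughly two of every three frames are dropped by design; the robot always acts on the freshest observation available.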
Results
Classifiers achieved 80–89% accuracy across the benchmark datasets. Perception-driven robot behavior was demonstrated in ROS, with live camera input driving Franka Emika reactions. Along the way, I resolved hardware-software connectivity issues and optimized inference for the limited onboard compute.
Discussion
The hardest part wasn't the model — it was the integration. Camera-to-model synchronization, inference latency under real-time constraints, and bridging ML outputs to robot control systems are engineering problems that benchmarks don't capture. This project made those gaps concrete and directly informed my later interest in sensing system design.
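One concrete form of the synchronization problem above: a detection is only useful if the frame it came from is still recent when the controller acts. A common mitigation is a timestamp-based staleness check against a latency budget. The budget value and function below are hypothetical, not the lab's actual parameters.

```python
import time

STALE_AFTER = 0.05  # seconds; illustrative latency budget, not a measured value

def fresh_enough(frame_stamp, now=None):
    """Accept a frame only if its camera timestamp is within the latency budget."""
    now = time.monotonic() if now is None else now
    return (now - frame_stamp) <= STALE_AFTER

now = 10.0
print(fresh_enough(9.97, now))  # 30 ms old  -> True
print(fresh_enough(9.90, now))  # 100 ms old -> False
```

In a real ROS system the same check runs against the header stamp of each image message; frames that fail it are dropped rather than driving the robot on outdated observations.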
My Contributions
Implemented deep learning classifiers from scratch and adapted ResNet-50 for real-time detection.
Built the full perception-to-ROS pipeline: camera streaming, detection relay, and Franka Emika integration.
Resolved hardware-software connectivity issues and optimized inference performance under compute constraints.