Operation Squirrel is built around a simple idea: students should
fly early and fly often so they can progressively unlock the skills needed for careers in
engineering, software development, and beyond. This is a deeply hands-on project.
Success is built from many small wins that compound over time, and failing early and fast is just as
important as those small victories: real learning happens when students try, receive feedback, and refine
their approach. Throughout, students develop practical, career-relevant skills through hands-on robotics
and AI projects.
Track 1 - Drone Basics, Safety, Early Flight, and AI Intro
Duration: ~1-2 weeks | Level: Beginner
Students get into the air quickly while learning safe operating practices and the fundamentals of working
with drones. They fly manually and in assisted modes, view live telemetry, and begin communicating with
their drone using the Jetson Orin Nano. They also run their first AI detection demo. By the end of Track 1,
students can navigate the ArduPilot drone ecosystem and connect their aircraft to the Operation Squirrel
autonomous stack.
Unboxing, hardware overview, and Operation Squirrel introduction
Introduction to the NVIDIA Jetson Orin Nano, Linux, SSH, and development tools
Connecting the Jetson Orin Nano to SITL
Connecting the Jetson Orin Nano to the physical drone
Real-time AI detection on the Jetson Orin Nano
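To give a concrete flavor of the connection work above, here is a minimal sketch of a first companion-computer link, assuming the pymavlink package; the connection string, serial option, and message types are illustrative rather than the course's exact setup.

```python
# Minimal sketch: connect a companion computer to ArduPilot SITL over UDP and
# print a few telemetry messages. Assumes the pymavlink package is installed
# and SITL is reachable on udp:127.0.0.1:14550 (illustrative endpoint).
from pymavlink import mavutil

# Open the MAVLink connection; swap in a serial device (e.g. a Jetson UART)
# when talking to a real flight controller instead of SITL.
conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")

# Wait for the first HEARTBEAT so we learn the autopilot's system/component IDs.
conn.wait_heartbeat()
print(f"Heartbeat from system {conn.target_system}, component {conn.target_component}")

# Print a handful of attitude and position messages as basic live telemetry.
for _ in range(10):
    msg = conn.recv_match(type=["ATTITUDE", "GLOBAL_POSITION_INT"], blocking=True)
    print(msg)
```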
Outcomes:
Identify and describe the core components of a drone system
Perform basic pre-flight and safety checks
Control a simulated drone using SITL
Control a real drone using a ground station
Interpret basic telemetry data from Mission Planner / QGroundControl
Exchange basic MAVLink messages and commands with a real/simulated drone
Navigate the Jetson development environment (Linux, SSH, development tools)
Control a real/simulated drone from the Jetson
Explain how companion computers, MAVLink, and ArduPilot work together
Run a basic AI detection demo on the Jetson Orin Nano
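The first AI demo can take several forms; the sketch below shows the general shape of a real-time detection loop, assuming the Ultralytics YOLO package and an OpenCV-readable camera. The actual Operation Squirrel demo may use a different model or runtime.

```python
# Minimal sketch of a real-time detection loop. Assumes the ultralytics and
# opencv-python packages and a display; the course demo may differ.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # small pretrained model (illustrative choice)
cap = cv2.VideoCapture(0)       # default camera; a CSI camera may need a GStreamer pipeline

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)   # run detection on one frame
    annotated = results[0].plot()           # draw boxes and class labels
    cv2.imshow("detections", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```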
Track 2 - MAVLink Motion Control Fundamentals
Duration: ~1 week | Level: Beginner-Intermediate
Before building their own controllers, students explore how drones interpret high-level
motion commands through MAVLink. They send velocity, position, and acceleration setpoints
from the Jetson Orin Nano, observe how the flight controller responds, and analyze
real-world effects such as overshoot, delay, and sensor noise. This hands-on module builds the intuition needed
for designing PID controllers in Track 3.
Sending position setpoints via MAVLink
Sending velocity setpoints via MAVLink
Sending acceleration setpoints via MAVLink
Understanding Guided and Guided-NOGPS modes
Reference frames: Local NED vs body-frame control
Observing overshoot, lag, and stabilization behavior
Basic step-input testing (move, stop, rotate)
Analyzing response in SITL before testing on hardware
Testing MAVLink control with a real drone
Outcomes:
Send and interpret MAVLink position, velocity, and acceleration commands
Explain how the flight controller stabilizes motion beneath high-level commands
Identify overshoot, lag, and noise in drone responses
Use simulation to analyze command-response behavior
Prepare filtered sensor inputs and control logic for PID development
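For reference, the sketch below shows one way to stream a velocity setpoint from a companion computer, assuming pymavlink and a GUIDED-mode vehicle already armed and airborne in SITL; the mask, rate, and values are illustrative.

```python
# Minimal sketch: command 1 m/s north in the Local NED frame for three
# seconds, then stop. Assumes pymavlink and SITL on udp:127.0.0.1:14550.
import time
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")
conn.wait_heartbeat()
mlink = mavutil.mavlink

# Velocity-only setpoint: ignore position, acceleration, yaw, and yaw rate.
type_mask = (
    mlink.POSITION_TARGET_TYPEMASK_X_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_Y_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_Z_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_AX_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_AY_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_AZ_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_YAW_IGNORE
    | mlink.POSITION_TARGET_TYPEMASK_YAW_RATE_IGNORE
)

def send_velocity(vx, vy, vz):
    """Send one SET_POSITION_TARGET_LOCAL_NED velocity setpoint (NED, m/s)."""
    conn.mav.set_position_target_local_ned_send(
        0,                                   # time_boot_ms (0 = not used)
        conn.target_system, conn.target_component,
        mlink.MAV_FRAME_LOCAL_NED,
        type_mask,
        0, 0, 0,                             # x, y, z position (ignored)
        vx, vy, vz,                          # velocity in m/s
        0, 0, 0,                             # acceleration (ignored)
        0, 0)                                # yaw, yaw rate (ignored)

# Setpoints must be streamed; ArduPilot stops following them if they go stale.
for _ in range(30):
    send_velocity(1.0, 0.0, 0.0)             # 1 m/s north, sent at 10 Hz
    time.sleep(0.1)
send_velocity(0.0, 0.0, 0.0)                 # command a stop
```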
Track 3 - Control Systems: PID, Filtering & Smooth Motion
Duration: ~3-5 weeks | Level: Intermediate
Students learn how to turn noisy, real-world measurements into smooth, predictable drone motion.
They implement PID controllers, apply filtering to reduce noise, and use simulation
to tune their control loops before validating their designs on the real drone in supervised,
constrained scenarios.
Understanding control concepts: error, proportional / integral / derivative terms
Implementing a simple PID loop
Expanding to 2D PID control for target-relative motion
Low-pass filtering to reduce measurement noise
Gain tuning: adjusting P, I, and D for stability and responsiveness
Using simulation to iterate and tune control loops safely
Understanding saturation, safety limits, and actuator constraints
Outcomes:
Explain how PID controllers work and describe the role of each term (P, I, and D)
Implement a functional 1D PID controller in simulation
Expand PID logic to 2D control for tracking or position adjustment
Apply low-pass filtering to smooth noisy measurements
Tune PID gains using simulation before deploying to real hardware
Validate PID performance on the real drone in supervised, constrained tests
Design a simple autonomous behavior using filtered sensor measurements and PID output
Describe how saturation limits and safety constraints protect the vehicle during control
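A minimal sketch of the kind of 1D controller and filter this track builds toward is shown below; the gains, filter coefficient, and output limit are illustrative placeholders, not tuned values.

```python
# Minimal 1D PID controller plus first-order low-pass filter. All constants
# here are illustrative starting points for tuning in simulation.
class LowPassFilter:
    """First-order low-pass filter; alpha near 1.0 trusts new measurements more."""
    def __init__(self, alpha, initial=0.0):
        self.alpha = alpha
        self.value = initial

    def update(self, measurement):
        self.value = self.alpha * measurement + (1.0 - self.alpha) * self.value
        return self.value


class PID:
    """Classic PID with output saturation to respect actuator and safety limits."""
    def __init__(self, kp, ki, kd, output_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate so the command never exceeds what the vehicle can safely do.
        return max(-self.output_limit, min(self.output_limit, output))


# Example: drive a position error toward zero using a filtered measurement.
pid = PID(kp=0.8, ki=0.05, kd=0.2, output_limit=2.0)   # 2 m/s command limit
lpf = LowPassFilter(alpha=0.3)
dt = 0.05                                              # 20 Hz control loop

target, measured = 5.0, 0.2                            # illustrative positions (m)
velocity_cmd = pid.update(target - lpf.update(measured), dt)
print(velocity_cmd)
```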
Track 3.5 - Rapid Development & Tuning with OSRemote
Duration: ~1-2 days | Level: Intermediate
After experiencing the challenges of manual PID tuning, students learn how to dramatically
speed up development using the OSRemote app. They adjust gains, filters, and behavior parameters
in real time without rebuilding code, enabling rapid iteration and more intuitive controller design.
Connecting OSRemote to the Jetson for live parameter editing
Real-time tuning of PID gains and filter coefficients
Testing instant behavior changes in simulation and real flight
Creating and switching between tuning profiles
Understanding runtime vs build-time parameters
Outcomes:
Use OSRemote to tune parameters in real time
Improve controller performance with rapid iteration cycles
Create and test parameter profiles for various behaviors
Apply fast tuning methods to the upcoming Kalman Filter (KF) and autonomy modules
Track 4 - Tracking & State Estimation (Kalman Filters)
Students learn why raw detection data alone is not ideal for reliable autonomy (too noisy, delayed, or
inconsistent). They build Kalman Filters to estimate smooth, continuous target motion from noisy camera
measurements. Students validate their filters in both SITL simulation and supervised real flight, tune
noise models, visualize performance live, and integrate KF outputs into the control system built in
Track 3.
Why state estimation is required: measurement noise, jitter, delay, and dropout
Difference between sensor measurements and model-based predictions
Prediction step: estimating future state from motion models
Update step: incorporating new sensor measurements
Kalman Gain: balancing trust between prediction and measurement
Building a 1D → 2D Kalman Filter (position + velocity)
Testing and visualizing KF output in SITL simulation
Injecting artificial noise into simulation to study filter behavior
Running Kalman Filters in real-time on the drone during supervised flight
Tuning Q (process noise) and R (measurement noise) matrices
Comparing raw detection vs KF estimate vs predicted positions
Using KF output to drive smoother and more stable PID behaviors
Understanding limitations: unmodeled motion, delays, and edge cases
Outcomes:
Explain the role of state estimation in robotics and autonomy
Implement and test a Kalman Filter in simulation and real flight
Tune Q and R to balance prediction accuracy and measurement trust
Analyze KF behavior using both SITL data and real flight logs
Replace noisy raw detections with stable KF estimates for control
Integrate KF output into PID-based autonomous behaviors
Diagnose common KF issues (latency, divergence, under/overfitting)
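For reference, the sketch below is a bare-bones constant-velocity Kalman Filter of the kind this track develops, with state [x, y, vx, vy] and noisy (x, y) detections as measurements; the Q and R values are illustrative starting points for tuning, not recommendations.

```python
# Bare-bones constant-velocity Kalman Filter: state [x, y, vx, vy], measured
# through noisy (x, y) detections. Q and R are illustrative starting points.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt):
        # State transition (constant-velocity model) and measurement matrices.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only position is measured
        self.Q = np.eye(4) * 0.01      # process noise: trust in the motion model
        self.R = np.eye(2) * 0.5       # measurement noise: trust in detections
        self.x = np.zeros((4, 1))      # state estimate
        self.P = np.eye(4)             # estimate covariance

    def predict(self):
        """Propagate the state forward with the motion model (prediction step)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """Fuse one (x, y) detection, weighted by the Kalman gain (update step)."""
        z = np.reshape(z, (2, 1))
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

# One cycle: predict forward, then correct with a noisy detection.
kf = ConstantVelocityKF(dt=0.05)
kf.predict()
kf.update([2.1, -0.4])                 # illustrative detection in meters
print(kf.x.ravel())                    # smoothed position and velocity estimate
```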
Track 5 - Full Autonomy & Capstone Projects
Duration: ~4-6+ weeks | Level: Advanced
Students combine perception, tracking, and control into a complete autonomy pipeline. They design,
test, and deploy a custom autonomous behavior of their choice, using the perception, filtering,
and control tools built throughout the course. The capstone project is intentionally open-ended,
allowing students to explore creative ideas while following a structured simulation → tethered →
supervised flight workflow.
Integrating YOLO detection, KF tracking, and PID control loops
Understanding sensor measurements vs model-based predictions
Predicting future target motion for smoother following
Designing safe, reliable, and robust autonomous behaviors
Capstone options: person-follow, payload timing, gesture control, reacquisition, etc.
Outcomes:
Build a complete perception → tracking → control autonomy stack
Design and implement a custom autonomous behavior from scratch
Use prediction and filtering to improve autonomy performance
Conduct safe supervised flight tests of student-written autonomy code
Present a multi-week capstone project demonstrating engineering mastery
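To show how the capstone pieces fit together, here is a purely structural sketch of a perception → tracking → control loop; every class and function in it is a simplified, hypothetical stand-in for the components students build in Tracks 1-4, not the project's actual code.

```python
# Structural sketch of the capstone loop: perception -> tracking -> control.
# The stand-ins below are deliberately tiny; real code would use the YOLO
# detector, Kalman Filter, PID controller, and MAVLink layer from earlier tracks.
import random
import time

class TinyFilter:
    """Stand-in for the Track 4 estimator: blends prediction and measurement."""
    def __init__(self, gain=0.4):
        self.gain, self.estimate = gain, 0.0
    def update(self, measurement):
        self.estimate += self.gain * (measurement - self.estimate)
        return self.estimate

class TinyController:
    """Stand-in for the Track 3 controller (P-only and saturated, for brevity)."""
    def __init__(self, kp=0.6, limit=2.0):
        self.kp, self.limit = kp, limit
    def update(self, error):
        return max(-self.limit, min(self.limit, self.kp * error))

def detect_target_offset():
    """Stand-in for YOLO perception: noisy lateral offset to the target (m)."""
    return 3.0 + random.gauss(0.0, 0.3)

def send_velocity_setpoint(vy):
    """Stand-in for the Track 2 MAVLink velocity command."""
    print(f"cmd vy = {vy:+.2f} m/s")

estimator, controller = TinyFilter(), TinyController()
for _ in range(20):                          # ~1 s of a 20 Hz autonomy loop
    offset = detect_target_offset()          # perception
    smooth = estimator.update(offset)        # tracking / state estimation
    cmd = controller.update(smooth)          # control: command velocity toward the target
    send_velocity_setpoint(cmd)
    time.sleep(0.05)
```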
Additional Tracks
Machine learning: model training, fine-tuning, and embedded deployment; custom CUDA layers (PyTorch -> ONNX -> TensorRT -> a custom CUDA kernel to run that layer), with the export step sketched below
CUDA kernels for image signal processing: camera rectification, decompanding, undistortion, and denoising
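As a taste of that deployment pipeline, the sketch below shows the PyTorch-to-ONNX export step with a toy network; the model, input shape, and file names are illustrative, and the TensorRT conversion appears only as a trtexec comment.

```python
# Minimal sketch of the PyTorch -> ONNX step of the deployment pipeline.
# TinyNet is a placeholder; a real detector or custom layer would go here.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(self.conv(x))

model = TinyNet().eval()
dummy = torch.randn(1, 3, 224, 224)          # example input shape

# Export to ONNX; on the Jetson, TensorRT can then build an engine, e.g.:
#   trtexec --onnx=tinynet.onnx --saveEngine=tinynet.engine --fp16
torch.onnx.export(model, dummy, "tinynet.onnx",
                  input_names=["image"], output_names=["features"],
                  opset_version=17)
```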
Sample Lesson Titles
Below are example lessons drawn from across the tracks to illustrate the type of hands-on work students do throughout the course:
Track 1 - Drone Basics, Safety & AI Intro
First Flight: Safety, Arming, and Assisted Modes
Inside the Drone: Hardware, Sensors, and What They Actually Do
Mission Planner & QGroundControl: Reading Telemetry Like an Engineer
Your First AI Demo: Running Real-Time Object Detection on Jetson
Track 2 - MAVLink Motion Control
Talking to the Drone: Sending Your First MAVLink Motion Commands
Stop, Go, Hover: Understanding Step Responses and Overshoot
Track 3 - PID Control Systems
From Error to Action: Building a Simple PID Controller
Tuning P, I, and D: Stability vs Responsiveness
Smooth Motion: Filtering Noisy Signals with Low-Pass Filters
Track 4 - Tracking & Kalman Filters
Why Raw Data Fails: Filtering and State Estimation Basics
Predict, Update, Correct: Building a 2D Kalman Filter for Tracking
Track 5 - Full Autonomy & Deployment
From Simulation to Sky: Deploying a Complete Autonomous Behavior
Student Learning Outcomes
By the end of the full sequence, students will be able to:
Safely operate and configure a fully autonomous drone platform.
Work in a Linux + Docker environment on NVIDIA Jetson hardware.
Run real-time AI perception (YOLO + OpenCV) on live camera feeds.
Design, tune, and debug basic control loops (PID) for motion.
Implement a simple Kalman filter for target tracking and prediction.
Integrate perception, estimation, and control into a working autonomy stack.
Use logged data (MCAP) to analyze, debug, and improve system performance.
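As an example of that MCAP-based analysis workflow, the sketch below lists the topics recorded in a log and checks message timing on one topic, assuming the mcap Python package; the file name and topic are hypothetical.

```python
# Minimal sketch: inspect an MCAP flight log for post-flight analysis.
# Assumes the mcap Python package; file and topic names are hypothetical.
from mcap.reader import make_reader

with open("flight_log.mcap", "rb") as f:
    reader = make_reader(f)

    summary = reader.get_summary()
    if summary is not None:
        for channel in summary.channels.values():
            print("recorded topic:", channel.topic)

    # Walk one topic and report the gap between consecutive messages, a quick
    # way to spot dropped frames or scheduling hiccups in the autonomy loop.
    prev_ns = None
    for schema, channel, message in reader.iter_messages(topics=["/target_estimate"]):
        if prev_ns is not None:
            gap_ms = (message.log_time - prev_ns) / 1e6
            print(f"{channel.topic}: {gap_ms:.1f} ms since previous message")
        prev_ns = message.log_time
```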
Learn More
For more information about the curriculum or current development status,
please get in touch.