Operation Squirrel is built around a simple idea: students should
fly early and fly often so they can progressively unlock the skills needed for careers in
engineering, software development, and beyond. This is a deeply hands-on project.
Success is built from many small wins that compound over time. Failing early and failing fast is just as
important as those small victories: real learning happens when students try, receive feedback, and refine
their approach.
Operation Squirrel is designed to help students develop practical,
career-relevant skills through hands-on robotics and AI projects.
Track 1 – Foundations and Flight Readiness
Duration: 4 weeks | Level: Beginner
By the end of Track 1, students will have the core tools and foundation needed to begin
developing their own autonomy code on NVIDIA Jetson devices and explore autonomous flight.
Students start by building a strong foundation in simulation using ArduPilot SITL, then
learn how to use ground stations to monitor and control their drone. They integrate a
Jetson Orin Nano as a companion computer and connect software, simulation, hardware, and
AI into a single working system.
Lesson 01 – Operation Squirrel Introduction
Lesson 02 – Introduction to ArduPilot & SITL
Lesson 03 – Ground Stations & SITL
Lesson 04 – Ground Stations & Real Vehicles
Lesson 05 – MAVLink Basics
Lesson 06 – Companion Computers
Lesson 07 – Introduction to the Jetson Orin Nano, Linux, SSH & Development Tools
Lesson 08 – Connecting the Jetson to SITL
Lesson 09 – Connecting the Jetson to the Real Drone
(Bonus) Lesson 10 – Real-Time AI Detection on the Jetson Orin Nano
Outcomes:
Set up and run ArduPilot Software-In-The-Loop (SITL) simulations independently and with Mission Planner
Fly and control a simulated drone using Mission Planner and live telemetry
Connect a real drone to a ground station and safely command it in assisted flight modes
Interpret and analyze live telemetry from both simulated and real drones
Send and receive MAVLink commands and telemetry between the Jetson Orin Nano and ArduPilot (SITL and real hardware)
Integrate a Jetson Orin Nano as a companion computer for both simulated and physical drones
Use the Jetson Orin Nano (Linux, SSH, and development tools) to control drones programmatically
Control a simulated drone directly from the Jetson Orin Nano without a ground station
Control a real drone from the Jetson Orin Nano using MAVLink communication
Run a real-time AI object detection pipeline on the Jetson Orin Nano and visualize detections live
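The telemetry and MAVLink outcomes above can be sketched in a few lines. This is a minimal illustration, not course code: it assumes pymavlink is installed and that SITL is listening on its default UDP endpoint (`udp:127.0.0.1:14550`); both are assumptions you would adjust for your setup.

```python
"""Minimal companion-computer telemetry sketch.
Assumes pymavlink and a local ArduPilot SITL instance (endpoint is an assumption)."""

# MAV_MODE_FLAG_SAFETY_ARMED bit from the MAVLink common message set.
ARMED_FLAG = 128

def is_armed(base_mode: int) -> bool:
    """Decode the armed bit from a HEARTBEAT base_mode field."""
    return bool(base_mode & ARMED_FLAG)

def main() -> None:
    from pymavlink import mavutil  # imported here; requires pymavlink

    # SITL's default MAVLink UDP endpoint; change for a real telemetry link.
    link = mavutil.mavlink_connection("udp:127.0.0.1:14550")
    link.wait_heartbeat()  # block until the autopilot announces itself

    while True:
        msg = link.recv_match(type=["HEARTBEAT", "GLOBAL_POSITION_INT"], blocking=True)
        if msg.get_type() == "HEARTBEAT":
            print("armed:", is_armed(msg.base_mode))
        else:
            # GLOBAL_POSITION_INT reports lat/lon in 1e-7 degrees, altitude in mm.
            print("lat:", msg.lat / 1e7, "lon:", msg.lon / 1e7,
                  "alt m:", msg.relative_alt / 1000)

# Run main() on the Jetson once SITL (or the real drone) is reachable.
```

The same loop works unchanged against SITL and a real vehicle, which is exactly why Track 1 validates everything in simulation first.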
Track 2 – Autonomous Flight Systems
Duration: 4–6 weeks | Level: Intermediate
In Track 2, students move beyond basic flight mechanics and learn how a real autonomous
flight system is designed, structured, and brought online safely. Students build the concepts
needed to create clean, controlled autonomous motion by designing safety gates, analyzing
system state, and progressively introducing autonomous behaviors such as takeoff,
navigation, and motion control.
Lesson 01 – Designing a Safe Autonomous Flight System
Lesson 02 – Autonomous Flight System Architecture
Lesson 03 – Designing Autonomous Behavior
Lesson 04 – Autonomous Takeoff, Landing, and State Transitions
Lesson 05 – Autonomous Navigation Using GPS Commands
Lesson 06 – Local Reference Frames: From GPS to NED
Lesson 07 – Autonomous Navigation Using Position and Velocity Control
Lesson 08 – Shaping Autonomous Motion: Rates, Limits, and Smoothness
Outcomes:
Design and understand state machines and state transitions in an autonomous flight system
Perform autonomous takeoff, landing, and navigation on both real and simulated drones
Safely execute autonomous flight using GPS and local NED position/velocity commands on real and simulated drones
Shape autonomous motion using rates, limits, and smoothing techniques
Diagnose autonomous flight behavior
Validate autonomous behaviors in SITL before deploying to real hardware
Evaluate perception-driven autonomous flight behavior and recognize when estimation and tracking are required
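The velocity-control and motion-shaping outcomes above can be combined into one small sketch: a rate-limited velocity command sent over MAVLink in the local NED frame. This is an illustrative sketch only; pymavlink, the SITL endpoint, and all numeric values (speeds, rates, loop frequency) are assumptions, not tuned course values.

```python
"""Sketch: slew-limited NED velocity commands in GUIDED mode.
pymavlink and the udp endpoint are assumptions; numbers are examples only."""
import time

def slew_limit(target: float, current: float, max_step: float) -> float:
    """Move current toward target by at most max_step per call (rate limiting)."""
    return current + max(-max_step, min(max_step, target - current))

def fly_forward(target_vx: float = 2.0, seconds: int = 5) -> None:
    from pymavlink import mavutil  # imported here; requires pymavlink

    link = mavutil.mavlink_connection("udp:127.0.0.1:14550")  # SITL default
    link.wait_heartbeat()

    vx = 0.0
    for _ in range(seconds * 10):            # 10 Hz command loop
        vx = slew_limit(target_vx, vx, 0.2)  # ramp at <= 2 m/s^2 for smooth motion
        link.mav.set_position_target_local_ned_send(
            0, link.target_system, link.target_component,
            mavutil.mavlink.MAV_FRAME_LOCAL_NED,
            0b0000111111000111,  # type_mask: command velocity only
            0, 0, 0,             # position (ignored)
            vx, 0, 0,            # velocity, m/s, NED frame
            0, 0, 0,             # acceleration (ignored)
            0, 0)                # yaw, yaw rate (ignored)
        time.sleep(0.1)

# Validate in SITL first: call fly_forward() with the vehicle armed in GUIDED mode.
```

The slew limiter is the "shaping" piece from Lesson 08: the commanded velocity ramps up instead of jumping, which keeps the resulting motion smooth and predictable.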
Track 3 – Perception and State Estimation for Autonomous Flight
Duration: 4–6 weeks | Level: Advanced
In Track 3, students learn how drones can perceive the world with a camera, and
transform that perception into autonomous flight. Building on the motion and control
foundations from Track 2, students work with video input and AI models to produce stable,
decision-ready targets for autonomous flight. They learn how raw sensor outputs are filtered
and tracked, how target states are estimated over time, and how state estimation enables
reliable, perception-driven behaviors such as autonomous following.
Lesson 01 – Designing a Perception Pipeline for Autonomy
Lesson 02 – Camera and Video Pipelines for Robotics
Lesson 03 – AI Object Detections as Measurements
Lesson 04 – Measurement Noise, Latency, and Dropout
Lesson 05 – Motion Models and Prediction
Lesson 06 – Kalman Filters: State, Prediction, and Update
Lesson 07 – Tuning Kalman Filters (Q, R, and Trust)
Lesson 08 – Target Tracking and Filtering for Autonomous Follow
Lesson 09 – Controlling Against Estimated State
Lesson 10 – Stable AI-Based Autonomous Follow
Outcomes:
Identify the sensor inputs required for perception-driven autonomous flight
Process live video input and extract AI detection outputs
Interpret AI detections as measurements rather than control commands
Determine what information is missing from raw sensor outputs for stable control
Connect sensor measurements to motion commands used in Track 2 autonomy
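The estimation thread running through Lessons 05–08 can be sketched with one common formulation: a constant-velocity Kalman filter run per image axis on a detection coordinate. This is one illustrative design, not necessarily the course's exact filter, and the Q/R values are placeholders you would tune as in Lesson 07.

```python
"""Sketch: 1-D constant-velocity Kalman filter for a detection coordinate.
One axis of a pixel-space tracker; Q/R values are illustrative, not tuned."""
import numpy as np

class CVKalman1D:
    def __init__(self, dt: float = 0.1, q: float = 1.0, r: float = 25.0):
        self.x = np.zeros(2)                         # state: [position, velocity]
        self.P = np.eye(2) * 500.0                   # large initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # we only measure position
        self.Q = np.eye(2) * q                       # process noise: trust in the model
        self.R = np.array([[r]])                     # measurement noise: trust in the sensor

    def predict(self) -> float:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z: float) -> float:
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

# On detector dropout (Lesson 04), predict without updating and keep
# controlling against the estimate rather than the missing measurement.
kf = CVKalman1D()
for z in [100.0, 102.0, None, 106.0]:  # None models a dropped detection
    kf.predict()
    if z is not None:
        kf.update(z)
```

Treating the detection as a measurement into this filter, rather than as a direct control command, is what turns a noisy, intermittent detector output into a stable target for the Track 2 motion commands.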
Capstone
Duration: 4–6+ weeks | Level: Advanced
The capstone represents a transition from guided learning to full student ownership.
Students define, design, and implement an autonomous system of their choosing using
the autonomy stack, simulation tools, and flight workflows developed throughout the course.
While project goals are student-driven, all capstone work follows shared safety,
validation, and testing expectations.
Capstone guidance:
Projects must be grounded in the provided autonomy software stack
All autonomy logic must be validated in simulation prior to real-world flight
Flight testing is conducted in supervised and safety-constrained environments
Design decisions should be informed by observed behavior, logs, and system state
Projects may evolve in scope based on testing results and system limitations
Capstone outcomes:
Design and deploy a complete autonomous system using perception, estimation, and control
Demonstrate safe, repeatable autonomous behavior in simulation and supervised flight
Apply system-level reasoning to balance safety, performance, and robustness
Diagnose and iterate on autonomous behavior using logs and observed system behavior
Communicate technical decisions, tradeoffs, and results clearly
Additional Tracks
Machine Learning for Embedded Autonomous Systems
Model training and fine-tuning
Exporting models to ONNX
Optimizing inference with TensorRT
Deploying models on embedded NVIDIA Jetson platforms
Implementing custom CUDA layers for unsupported operations
Full pipeline: PyTorch -> ONNX -> TensorRT, with a custom CUDA kernel to run the unsupported layer
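The first hop of that pipeline, exporting from PyTorch to ONNX, can be sketched as follows. The model, file name, tensor shapes, and input/output names here are all placeholders; this is an assumption-laden illustration, not the track's actual network.

```python
"""Sketch: exporting a PyTorch model to ONNX, the first hop in the
PyTorch -> ONNX -> TensorRT pipeline (model and shapes are placeholders)."""

def export_to_onnx(out_path: str = "model.onnx") -> str:
    import torch  # requires PyTorch; imported here to keep the helper importable

    model = torch.nn.Sequential(         # placeholder for your trained network
        torch.nn.Conv2d(3, 8, 3, padding=1),
        torch.nn.ReLU(),
    ).eval()
    dummy = torch.randn(1, 3, 224, 224)  # one example input fixes the graph shapes
    torch.onnx.export(
        model, dummy, out_path,
        input_names=["image"], output_names=["features"],
        dynamic_axes={"image": {0: "batch"}},  # allow variable batch at inference
    )
    # The resulting file is what TensorRT (e.g. trtexec) parses on the Jetson;
    # layers TensorRT cannot map are where the custom CUDA plugin work begins.
    return out_path
```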
CUDA Kernels and Vision Acceleration
GPU-accelerated image signal processing (ISP)
Camera rectification and distortion correction
Decompanding and dynamic range processing
Image denoising and filtering
Optimizing memory access and kernel performance
MAVLink Interfaces for Autonomous Systems
Using MAVLink as a control and telemetry interface
Designing a minimal command surface for autonomy
Command vs telemetry separation and authority
Message rates, timing, and update behavior
Failure modes, safety gates, and loss-of-link handling
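The loss-of-link handling above reduces to a small, testable piece of logic: a watchdog that tracks the last heartbeat and reports when the link has gone stale. The 2-second timeout and the suggested failsafe action are assumptions for illustration, not a specification.

```python
"""Sketch: heartbeat watchdog for loss-of-link handling.
The timeout value and the failsafe response are assumptions, not a spec."""
import time
from typing import Optional

class LinkMonitor:
    """Tracks the last heartbeat and reports whether the link is alive."""

    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_heartbeat: Optional[float] = None

    def on_heartbeat(self, now: Optional[float] = None) -> None:
        """Call whenever a HEARTBEAT message arrives."""
        self.last_heartbeat = time.monotonic() if now is None else now

    def link_ok(self, now: Optional[float] = None) -> bool:
        if self.last_heartbeat is None:
            return False                  # never heard from the peer
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) <= self.timeout_s

# An autonomy loop would check link_ok() every tick and trigger a
# failsafe (e.g. loiter or RTL) the moment it returns False.
```

Keeping this check separate from the command path is one way to enforce the command/telemetry separation and authority rules listed above.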
Student Learning Outcomes
By the end of this curriculum, students will be able to:
Safely operate and configure a fully autonomous drone platform.
Work in a Linux + Docker environment on NVIDIA Jetson hardware.
Run real-time AI perception (YOLO + OpenCV) on live camera feeds.
Design, tune, and debug basic control loops (PID) for motion.
Implement a Kalman filter for target tracking and prediction.
Use the MAVLink protocol for autonomous control of a drone from a companion computer.
Integrate AI perception, estimation, and control into a working autonomy stack.
Use logged data (MCAP) to analyze, debug, and improve system performance.
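The PID outcome above is small enough to sketch in full. The gains and the output clamp below are illustrative placeholders, not tuned flight values; tuning them against logged behavior is the actual skill the curriculum targets.

```python
"""Sketch: a minimal PID loop of the kind used for motion control.
Gains and the clamp range are illustrative, not tuned values."""

class PID:
    def __init__(self, kp: float, ki: float, kd: float, out_limit: float = 5.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit       # clamp the output (e.g. max m/s)
        self.integral = 0.0
        self.prev_error = None

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))

# Example: drive a position error toward zero with a P-dominant controller.
pid = PID(kp=0.8, ki=0.05, kd=0.1)
cmd = pid.step(error=2.0, dt=0.1)  # positive error -> positive velocity command
```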
Learn More
For more information about the curriculum or current development status,
please get in touch.