The Problem with Flight Planning

The current SAR drone workflow looks like this: drive to the scene, unpack the drone, open the manufacturer's flight planning app, import map data, manually place waypoints one by one at the correct spacing. That spacing requires calculating camera FOV coverage at the planned altitude — most pilots just guess. Then configure camera settings. Configure safety parameters. Upload via the manufacturer's proprietary connection protocol. Run pre-flight checks. Launch. Time from arrival to airborne: 15–30 minutes.

In search and rescue, the first hour is the golden hour. Survival probability drops measurably with every minute. Spending half of that hour on flight planning is not acceptable. The problem is not the drone hardware. The problem is the software workflow between the operator's intent and the drone's execution.

Overwatch Core compresses that workflow to five steps and under 60 seconds of hands-on time. Here is exactly how.

Step 1 — Open the Planner

Open any browser on any device — laptop, tablet, phone. Navigate to the Overwatch web application. No install. No login wall for field operations. The interface loads: a dark tactical map built on satellite imagery, high contrast, designed to stay readable outdoors even with screen brightness turned down. White text on dark backgrounds. Orange accents for actionable elements. Minimal visual noise.

The map tiles are cached for offline use in areas with no cellular connectivity. This matters because SAR operations happen in places where cell towers do not reach — mountain valleys, coastal cliffs, dense forest. The planner works in the field, not just in an office with WiFi. Pre-cache the tiles for your operating area before deployment, and the entire planning interface functions without any network connection.
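Enumerating which tiles to pre-cache for an operating area is standard Web Mercator tile math. A minimal sketch (the `tile_range` helper and the example coordinates are illustrative, not Overwatch's actual caching code):

```python
import math

def tile_range(lat_min, lat_max, lon_min, lon_max, zoom):
    """Return the (x, y) tile index ranges covering a bounding box
    at a given zoom level, using standard Web Mercator tiling."""
    def to_tile(lat, lon, z):
        n = 2 ** z
        x = int((lon + 180.0) / 360.0 * n)
        lat_r = math.radians(lat)
        y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
        return x, y

    x0, y0 = to_tile(lat_max, lon_min, zoom)  # NW corner
    x1, y1 = to_tile(lat_min, lon_max, zoom)  # SE corner
    return range(x0, x1 + 1), range(y0, y1 + 1)

# Example: a roughly 2 km x 2 km search area at zoom 16
# (coordinates are arbitrary placeholders)
xs, ys = tile_range(47.60, 47.62, -122.35, -122.32, 16)
print(len(xs) * len(ys), "tiles to cache")
```

Caching a handful of zoom levels for a small operating area is a few hundred tiles at most, which is why pre-caching before deployment is cheap.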

Step 2 — Draw the Search Area

Tap vertices on the map to define the search polygon. Three taps for a triangle. Four for a rectangle. As many as needed for complex terrain-following boundaries — an L-shaped riverbank, a narrow ridgeline, the perimeter of a lake.

As each vertex is placed, the boustrophedon grid pattern renders as an overlay in real time. The sweep lines appear on the map as they would be flown — parallel passes aligned to the polygon's principal axis, clipped to the boundary, connected by turn segments at each end. Mission statistics update live: area in square meters, total flight distance, estimated flight time, number of sweep lines, effective coverage percentage. The operator sees exactly what the drone will fly before committing to anything.

The polygon can be edited after placement. Drag a vertex and the grid recomputes instantly. Add a vertex to refine the boundary and the sweep lines adjust. The feedback loop between intent and flight plan is immediate — no "recalculate" button, no loading spinner, no round trip to a server.
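The grid generation itself is simple enough to sketch. The following is a deliberately simplified illustration that sweeps an axis-aligned bounding box; the real planner clips the sweep lines to the polygon boundary and aligns them to its principal axis, which this sketch omits:

```python
import math

def boustrophedon(x_min, y_min, x_max, y_max, spacing):
    """Generate a back-and-forth sweep over an axis-aligned box.

    Sweep lines run parallel to the x-axis, separated by `spacing`
    (footprint width minus overlap). Returns an ordered waypoint
    list [(x, y), ...] alternating direction each pass.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

def mission_stats(waypoints, speed):
    """Total path length (m) and flight time (s) at constant speed."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]))
    return dist, dist / speed

# A 200 m x 100 m box with 32 m line spacing: 4 passes, 896 m, 224 s at 4 m/s
wps = boustrophedon(0, 0, 200, 100, 32)
distance, flight_time = mission_stats(wps, 4.0)
```

Because the computation is this cheap, recomputing it on every vertex drag is trivial, which is what makes the no-server, no-spinner feedback loop possible.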

Step 3 — Configure Parameters

Three sliders.

Altitude (10–50m AGL). Higher altitude covers more ground per sweep but reduces the resolution of the detection model. At 30m, the camera footprint is approximately 41m wide — a person is roughly 15–20 pixels tall in each frame, which is within the SSD MobileNet v2 model's detection capability. At 50m, a person drops to approximately 10 pixels. Still detectable in good conditions, but marginal. The operator chooses based on terrain openness and detection priority.

Overlap (0–60%). The percentage of adjacent sweep strips that overlap. More overlap means each ground point is imaged from multiple frames at different angles, increasing the probability that at least one frame captures a detectable view of a target. Default is 20%. For dense vegetation where a person might only be visible from certain angles, an operator might push this to 40% or higher — accepting longer flight time for higher detection probability.

Speed (2–8 m/s). Faster flight covers more area per battery charge but reduces the number of frames captured per ground point. At 4 m/s and 30m altitude, the detection pipeline running at ~5 FPS captures approximately one frame every 0.8 meters of ground travel — dense enough for reliable detection. At 8 m/s, that drops to one frame every 1.6 meters. Acceptable for open terrain, risky for cluttered environments.

The planner recomputes the grid instantly with each slider adjustment. Flight time, distance, and coverage statistics update in real time. The operator makes tradeoffs with full visibility into their consequences.
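The tradeoffs the sliders expose reduce to a few lines of geometry. A sketch, assuming a roughly 69° horizontal FOV (consistent with the ~41m footprint at 30m quoted above) and the ~5 FPS detection rate:

```python
import math

def footprint_width(altitude_m, hfov_deg=69.0):
    """Ground width imaged by a nadir camera.
    Assumes ~69 deg horizontal FOV; gives ~41 m at 30 m altitude."""
    return 2 * altitude_m * math.tan(math.radians(hfov_deg / 2))

def line_spacing(altitude_m, overlap=0.20, hfov_deg=69.0):
    """Distance between adjacent sweep lines for a given overlap
    fraction (default matches the planner's 20% default)."""
    return footprint_width(altitude_m, hfov_deg) * (1 - overlap)

def frame_spacing(speed_ms, fps=5.0):
    """Ground distance travelled between consecutive detection frames."""
    return speed_ms / fps
```

Raising the overlap slider shrinks `line_spacing`, which adds sweep lines and flight time; raising the speed slider stretches `frame_spacing`, which thins the frame coverage per ground point. The planner is just re-running this geometry on every slider change.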

Step 4 — Generate and Upload

Click "Generate Mission." The system does not produce a waypoint list. It packages a complete AirSDK project: a flight supervisor (a 7-state finite state machine handling takeoff, transit, grid search, return, landing, and emergency states), a vision detection service running TFLite SSD MobileNet v2 inference on every camera frame, and a safety monitor watching battery, GPS, and communication status from an independent thread. All of it compiled and configured for the specific mission parameters the operator just defined.
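The supervisor's state machine can be sketched as a transition table. State names follow the description above; the `IDLE` pre-takeoff state and the exact transition set are assumptions for illustration, not the actual supervisor code:

```python
from enum import Enum, auto

class FlightState(Enum):
    """Sketch of the 7-state flight supervisor."""
    IDLE = auto()         # assumed pre-takeoff/ready state
    TAKEOFF = auto()
    TRANSIT = auto()
    GRID_SEARCH = auto()
    RETURN = auto()
    LANDING = auto()
    EMERGENCY = auto()

# Nominal forward transitions; EMERGENCY is reachable from any airborne state.
TRANSITIONS = {
    FlightState.IDLE:        {FlightState.TAKEOFF},
    FlightState.TAKEOFF:     {FlightState.TRANSIT, FlightState.EMERGENCY},
    FlightState.TRANSIT:     {FlightState.GRID_SEARCH, FlightState.EMERGENCY},
    FlightState.GRID_SEARCH: {FlightState.RETURN, FlightState.EMERGENCY},
    FlightState.RETURN:      {FlightState.LANDING, FlightState.EMERGENCY},
    FlightState.LANDING:     {FlightState.IDLE, FlightState.EMERGENCY},
    FlightState.EMERGENCY:   {FlightState.LANDING},
}

def can_transition(src, dst):
    """True if dst is a legal next state from src."""
    return dst in TRANSITIONS[src]
```

The point of the table form is that illegal jumps (say, straight from idle to grid search) are rejected structurally rather than by scattered if-checks.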

The flight plan is encoded as relative displacement vectors — sequences of {dx, dy, dz, dpsi} from each waypoint to the next, executed via moveBy commands. This is the GPS-denied navigation architecture: the drone does not need continuous GPS to fly the mission. It uses whatever position estimation is available and tracks displacement from the last known good fix.
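Converting an absolute waypoint list into that displacement encoding is a simple fold over consecutive pairs. A sketch (the yaw handling and the body-frame vs. local-frame convention of the real moveBy command are glossed over here):

```python
import math

def encode_relative(waypoints):
    """Encode an (x, y) waypoint sequence as relative displacement
    steps {dx, dy, dz, dpsi}, the form executed via moveBy commands.
    Sketch only: dz stays 0 after the initial climb, and dpsi turns
    the nose along each next leg."""
    steps = []
    heading = 0.0
    px, py = waypoints[0]
    for x, y in waypoints[1:]:
        dx, dy = x - px, y - py
        new_heading = math.atan2(dy, dx)
        # keep the yaw change in [-pi, pi)
        dpsi = (new_heading - heading + math.pi) % (2 * math.pi) - math.pi
        steps.append({"dx": dx, "dy": dy, "dz": 0.0, "dpsi": dpsi})
        heading = new_heading
        px, py = x, y
    return steps
```

Because every step is a displacement from the previous one, losing GPS mid-mission does not invalidate the plan: the drone keeps executing displacements from its last known good fix, exactly as described above.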

Connect the laptop to the ANAFI UKR's WiFi network. Click "Upload." The mission archive transfers in seconds. No tethered connection. No SD card. No USB cable. No proprietary ground station software. The drone now has everything it needs to execute the entire mission autonomously — flight path, detection model, safety logic, and alert publication — all onboard.

Step 5 — Launch

Click "Launch." The drone takes off, climbs to mission altitude, transits to the grid start point, and begins the sweep. The flight supervisor manages all state transitions autonomously. The hybrid GPS/VIO navigation system tracks position using whatever signals are available — full GPS, degraded GPS, or pure visual-inertial odometry in GPS-denied environments.

The detection pipeline processes every camera frame at ~5 FPS during the grid search phase. Detections above the confidence threshold are geolocated using the drone's position estimate and camera geometry, cropped from the source frame, and published as alerts. They appear on the operator's map in real time over the WiFi link, or queue onboard for delivery when the communication link recovers after a dropout.
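Geolocating a detection from the drone's position estimate and camera geometry can be sketched for the simplest case: a nadir-pointing camera over flat ground. The FOV values are assumptions, and the real pipeline also has to account for gimbal angle and aircraft heading:

```python
import math

def geolocate(px_u, px_v, img_w, img_h, altitude_m, drone_x, drone_y,
              hfov_deg=69.0, vfov_deg=55.0):
    """Project a detection's pixel center to a ground position,
    assuming a nadir camera over flat ground (simplified sketch)."""
    half_w = altitude_m * math.tan(math.radians(hfov_deg / 2))
    half_h = altitude_m * math.tan(math.radians(vfov_deg / 2))
    # normalized offsets from image center, in [-1, 1]
    nx = (px_u - img_w / 2) / (img_w / 2)
    ny = (px_v - img_h / 2) / (img_h / 2)
    # image +v points down the frame, which maps to -y on the ground
    return drone_x + nx * half_w, drone_y - ny * half_h
```

The accuracy of the result is bounded by the position estimate feeding it, which is why alerts degrade gracefully rather than failing outright when GPS quality drops.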

The operator watches alerts, not video. No second person staring at a screen. No attention fatigue. The system tells the operator when it finds something and where.

Total hands-on time from opening the app to the drone being airborne: under 60 seconds. Total time including walking to the launch point, unpacking the drone, powering on, and connecting WiFi: under 5 minutes.

Why 60 Seconds Matters

The argument from the first post in this series was straightforward: the survival probability curve drops with every minute of delay. A missing person in cold water has a survival window measured in minutes. A hiker with a medical emergency has a window measured in hours. In both cases, the clock starts before the SAR team arrives on scene.

A system that shaves 15–25 minutes off deployment time is not an incremental improvement. It is a different category of response capability. Consider two teams arriving at the same scene at the same time. Team A uses a conventional flight planning workflow: 20 minutes to configure waypoints, upload, and launch. Team B uses Overwatch Core: 5 minutes from vehicle door to drone airborne. Team B's drone has been searching for 15 minutes before Team A's drone leaves the ground. At 4 m/s sweep speed covering a 40m-wide strip, that is 15 minutes of flight time — approximately 3.6 km of sweep distance — covering roughly 144,000 square meters of ground.
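The head-start arithmetic is worth making explicit:

```python
# Team B's 15-minute head start at the sweep parameters above
head_start_s = 15 * 60     # seconds
speed = 4.0                # m/s sweep speed
strip_width = 40.0         # m effective strip width per pass

sweep_distance = head_start_s * speed        # 3600 m = 3.6 km
area_covered = sweep_distance * strip_width  # 144,000 square meters
```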

That is not efficiency. That is 144,000 square meters of search area covered while the other team is still placing waypoints.

A team that can have a drone searching within 5 minutes of arrival operates in a fundamentally different operational envelope than one that spends 30 minutes on flight planning. The difference is not about convenience or user experience polish. The difference is outcomes — whether a person is found in the window where finding them matters.