GEOAI CAPABILITY · IMAGE FUSION & REGISTRATION

Optical-SAR Alignment

Cross-modal image registration that fuses optical and Synthetic Aperture Radar imagery into a unified, co-registered geospatial layer for all-weather, day-and-night intelligence.

OVERVIEW

What is Optical-SAR Alignment?

Optical-SAR Alignment is the process of geometrically co-registering images acquired by optical sensors (visible / near-infrared cameras) with images acquired by Synthetic Aperture Radar (SAR) sensors. The two modalities have complementary strengths: optical delivers high-resolution detail and rich spectral signatures, while SAR penetrates clouds and operates day or night.

When the two are aligned at sub-pixel accuracy, every downstream model (change detection, classification, anomaly scoring) can reason across modalities pixel for pixel. Sentient targets sub-pixel accuracy (typically below 0.5 pixel RMSE) across more than 150 satellite sources, so analysts get a single, dependable geospatial layer regardless of which sensor produced any given scene.

METHODOLOGY

RPC-Based Alignment Pipeline

STEP 01

RPC Metadata Extraction

Rational Polynomial Coefficients (RPCs) encode each sensor’s geometric model. Sentient parses RPC metadata across 150+ satellite sources to bootstrap a precise initial ground-to-image mapping for both the optical and the SAR scene.
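The ground-to-image mapping an RPC model encodes is a ratio of cubic polynomials in normalized latitude, longitude, and height. A minimal sketch of that evaluation is below; the coefficient dictionary layout and the 20-term ordering are illustrative conventions chosen here, not a specific vendor's file format (real NITF/RPB files fix their own term order, which the coefficients must match).

```python
import numpy as np

def rpc_terms(P, L, H):
    """Cubic polynomial terms in normalized lat (P), lon (L), height (H).

    20 terms in a fixed order chosen for this sketch; real RPC metadata
    defines its own ordering, and coefficients must match it.
    """
    return np.array([
        1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L**3, L * P * P, L * H * H, L * L * P,
        P**3, P * H * H, L * L * H, P * P * H, H**3,
    ])

def rpc_ground_to_image(lat, lon, h, rpc):
    """Map a ground point to image (row, col) via ratios of cubic polynomials."""
    # Normalize using the offsets/scales carried in the RPC metadata.
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h - rpc["h_off"]) / rpc["h_scale"]
    t = rpc_terms(P, L, H)
    row = (rpc["line_num"] @ t) / (rpc["line_den"] @ t)
    col = (rpc["samp_num"] @ t) / (rpc["samp_den"] @ t)
    # De-normalize back to pixel coordinates.
    return (row * rpc["line_scale"] + rpc["line_off"],
            col * rpc["samp_scale"] + rpc["samp_off"])
```

Evaluating this model for both the optical and the SAR scene gives the initial, metadata-only alignment that the later refinement steps improve on.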

STEP 02

RPC Refinement with Tie Points

Automatically extracted tie points correct residual biases introduced by orbit determination and attitude measurement errors. This refinement step typically improves geolocation accuracy from ~5 m CE90 down to sub-meter levels.
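One common way to apply such a refinement is a least-squares fit of an image-space correction from the tie points. The sketch below assumes the residual orbit/attitude errors are well modeled by a single affine transform per scene (a common simplification for small biases); the function names are this sketch's own, not Sentient's internals.

```python
import numpy as np

def fit_rpc_bias(predicted, measured):
    """Fit an affine correction in image space from tie points.

    predicted, measured: (N, 2) arrays of (row, col) pixel coordinates,
    where `predicted` comes from the uncorrected RPC projection and
    `measured` from automatic tie-point matching.
    Returns a function applying the fitted correction to predicted coords.
    """
    n = predicted.shape[0]
    A = np.hstack([predicted, np.ones((n, 1))])       # (N, 3) design matrix
    X, *_ = np.linalg.lstsq(A, measured, rcond=None)  # (3, 2) affine params
    return lambda pts: np.hstack([pts, np.ones((len(pts), 1))]) @ X

def rmse(a, b):
    """Root-mean-square point-to-point distance in pixels."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```

Comparing `rmse` before and after the correction is how the improvement from ~5 m CE90 toward sub-meter geolocation would be quantified on a given scene.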

STEP 03

DEM-Assisted Ortho-Projection

Both images are ortho-projected using their refined RPCs combined with a Digital Elevation Model (Copernicus GLO-30 or ALOS World 3D). This removes parallax and terrain-induced distortion before fusion.
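The core loop of DEM-assisted ortho-projection can be sketched as: for each cell of the output ground grid, look up the terrain height, push the (lat, lon, height) triple through the refined sensor model, and read the source pixel it lands on. The `ground_to_image` callable below stands in for a refined RPC model, and nearest-neighbor lookup keeps the sketch short; this is an illustration of the geometry, not production resampling.

```python
import numpy as np

def orthorectify(image, dem, grid_lat, grid_lon, ground_to_image):
    """Project each output ground cell into the source image via a DEM.

    grid_lat / grid_lon: 2D arrays defining the output geographic grid.
    dem: terrain height at each output cell (same shape as the grid).
    ground_to_image: sensor model (e.g. a refined RPC) mapping
    (lat, lon, h) -> fractional (row, col) arrays.
    """
    rows, cols = ground_to_image(grid_lat, grid_lon, dem)
    # Nearest-neighbor lookup for brevity; a real pipeline would
    # interpolate (bilinear/cubic) at the fractional positions.
    r = np.clip(np.rint(rows).astype(int), 0, image.shape[0] - 1)
    c = np.clip(np.rint(cols).astype(int), 0, image.shape[1] - 1)
    return image[r, c]
```

Because the DEM height enters the projection of every cell, terrain-induced displacement is removed per pixel rather than by a global warp, which is what makes the optical and SAR grids line up over relief.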

STEP 04

Resampling & Fusion Output

Bilinear or cubic resampling produces the final co-registered stack. Accuracy is validated against check points distributed across the scene, with sub-pixel RMSE targets (below 0.5 px) before any product is released downstream.
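The two pieces of this step, fractional-pixel resampling and check-point validation, can be sketched as follows. Bilinear interpolation is shown (cubic would follow the same pattern with a larger neighborhood); the function names and the release-gate comparison are this sketch's own framing of the 0.5 px target.

```python
import numpy as np

def bilinear_sample(image, rows, cols):
    """Bilinear interpolation at fractional (row, col) positions."""
    r0 = np.clip(np.floor(rows).astype(int), 0, image.shape[0] - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, image.shape[1] - 2)
    dr, dc = rows - r0, cols - c0
    top = image[r0, c0] * (1 - dc) + image[r0, c0 + 1] * dc
    bot = image[r0 + 1, c0] * (1 - dc) + image[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bot * dr

def check_point_rmse(predicted_px, reference_px):
    """Residual RMSE (pixels) over independent check points.

    predicted_px / reference_px: (N, 2) arrays of (row, col) positions.
    """
    d = predicted_px - reference_px
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

A release gate in this framing is simply `check_point_rmse(pred, ref) < 0.5`: only stacks whose residual at independent check points clears the sub-pixel target move downstream.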

APPLICATIONS

Where it shows up

Disaster Response

Combine the immediate, weather-independent SAR view with archival optical imagery to map flood extents, damaged structures, and access routes within hours of an event.

Change Detection

Pixel-aligned multi-modal stacks let temporal models reason about what truly changed on the ground, not what changed because the sensor moved.

Land Use Classification

Fuse spectral signatures from optical bands with SAR backscatter to disambiguate land cover classes that look identical to any single sensor.

Maritime Surveillance

Track vessels through cloud cover and night using SAR, then re-acquire identity, type, and activity context from cleanly aligned optical scenes.