Test, compare and validate vision models in real outdoor conditions — before they reach deployment.

Why it exists

Many teams validate vision models on clean datasets and short demos. Then deployment fails under lighting transitions, motion blur, seasonal changes, or low-contrast terrain.

AGVScanner helps you test models on real video streams and compare baseline vs adapted versions under identical conditions.

Compare models side-by-side

Run up to four pipelines in parallel to compare baseline and adapted models using the same input.
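
For illustration only, a minimal Python sketch of feeding an identical input to several pipelines at once; the pipeline callables, the NumPy frame, and the four-pipeline check are assumptions for the example, not AGVScanner's API:

    # Sketch only: feed the same frame to up to four perception pipelines so
    # their outputs can be compared directly. Each pipeline is a hypothetical
    # callable that takes a frame (assumed to be a NumPy image array) and
    # returns an annotated frame or prediction.
    from concurrent.futures import ThreadPoolExecutor

    def run_side_by_side(frame, pipelines):
        if not 1 <= len(pipelines) <= 4:
            raise ValueError("expected between one and four pipelines")
        with ThreadPoolExecutor(max_workers=len(pipelines)) as pool:
            # Every pipeline receives its own copy of the identical input.
            futures = [pool.submit(p, frame.copy()) for p in pipelines]
            return [f.result() for f in futures]

Results come back in the same order as the pipelines, so baseline and adapted outputs stay aligned frame by frame.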

Test on real inputs

Use a live camera, a network stream, or recorded video to see what happens in your real operating conditions.
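
For reference, the three input types above are commonly opened like this with OpenCV; the stream URL and file name are placeholders, and AGVScanner's own input handling may differ:

    # Sketch of the three input types, opened with OpenCV's VideoCapture.
    import cv2

    cap_camera = cv2.VideoCapture(0)                             # live camera, by device index
    cap_stream = cv2.VideoCapture("rtsp://camera.local/stream")  # network stream (placeholder URL)
    cap_file   = cv2.VideoCapture("field_run.mp4")               # recorded video (placeholder file)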

Integrate with robotics systems

Designed for edge deployment and integration workflows (e.g., ROS2 and messaging pipelines).
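
As a rough sketch of how a comparison node could sit in a ROS2 graph using rclpy; the topic names and the pass-through callback are illustrative assumptions, not AGVScanner's interface:

    # Sketch of a minimal rclpy node: subscribe to camera frames, run the
    # comparison, and republish an annotated result for downstream tooling.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image

    class ComparisonNode(Node):
        def __init__(self):
            super().__init__("vision_comparison")
            self.sub = self.create_subscription(
                Image, "/camera/image_raw", self.on_frame, 10)
            self.pub = self.create_publisher(
                Image, "/vision_comparison/overlay", 10)

        def on_frame(self, msg):
            # The comparison pipelines would run here; in this sketch the
            # frame is republished unchanged.
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(ComparisonNode())
        rclpy.shutdown()

    if __name__ == "__main__":
        main()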

Key capabilities

  • Up to four parallel perception pipelines
  • Baseline vs adapted model comparison
  • Segmentation and detection orchestration
  • Live camera, stream or file input
  • Edge deployment ready

What you can do with it

  • Compare a pretrained baseline with an environment-aligned version (a minimal metric sketch follows this list)
  • Test segmentation and detection under motion and lighting transitions
  • Run multiple pipelines in parallel during field evaluation
  • Use it as a structured validation step before deployment
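
As one example of what such a validation step could record, here is a sketch of a per-frame agreement score (IoU) between a baseline and an adapted segmentation mask; the binary-mask layout and NumPy usage are assumptions for illustration, not AGVScanner's metric:

    # Sketch: per-frame agreement (IoU) between two binary segmentation masks,
    # e.g. a baseline and an environment-aligned model run on the same frame.
    import numpy as np

    def mask_iou(baseline_mask, adapted_mask):
        baseline = np.asarray(baseline_mask, dtype=bool)
        adapted = np.asarray(adapted_mask, dtype=bool)
        union = np.logical_or(baseline, adapted).sum()
        if union == 0:
            return 1.0  # both masks empty: count as full agreement
        return float(np.logical_and(baseline, adapted).sum() / union)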

Want help using it in your project?

If you need structured ML preparation, stability evaluation, or integration into your robotics architecture, I collaborate with teams at this stage.