Manufacturing Quality Inspector
Computer vision inspection trained on your line, not on labeled defects.

The problem being solved
A manufacturing line producing 1,000+ units per shift relies on human visual inspection at 2-3 checkpoints. Research shows inspection error rates increase approximately 20% after 30 minutes of continuous inspection due to fatigue. Inspectors miss hairline cracks, microscopic contamination, and slight color variations that cause field failures.
Cognex AI-powered vision systems analyze thousands of parts per minute, resolving details as small as a few microns. Their edge learning and deep learning tools handle tasks too complex for rule-based machine vision. Instrumental's anomaly detector identifies novel defects without initial training, learning from operator feedback over successive sessions.
Mid-market manufacturers cannot justify enterprise vision systems from Cognex or Keyence that require significant capital and machine vision engineering expertise.
How this agent works
This agent connects to industrial cameras at inspection points. Computer vision models are trained on your product's defect taxonomy: surface defects (scratches, dents, discoloration), dimensional deviations (warping, out-of-spec measurements), assembly errors (missing components, misalignment), and contamination.
The system uses anomaly detection: it learns what "good" looks like from examples of passing products, then flags deviations. As operators confirm or dismiss flags, the model refines detection boundaries. It can identify novel defect types not in the original training data.
Each unit receives a pass/fail disposition with per-defect confidence scores. Failed units are automatically diverted. Complete inspection records (timestamped images, classifications, disposition) are retained for ISO 9001 and IATF 16949 traceability.
The agent uses a PyTorch-trained anomaly detection model exported to ONNX Runtime for low-latency inference at the line edge, with no pre-labeled defect dataset required. It builds a statistical baseline from 500-1,000 conforming product images, then flags deviations from that distribution. A FastAPI inference server handles camera acquisition over GigE Vision or USB3 Vision, writes pass/fail signals to the PLC, and logs timestamped results to PostgreSQL. Sensitivity thresholds are tuned during the first two weeks using operator feedback on edge cases.
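The baseline-then-flag idea can be illustrated with a minimal sketch. This is not the deployed model — the production system uses a PyTorch network exported to ONNX — but a simplified per-pixel statistical baseline shows the same pass/fail mechanic; the function names and the threshold value here are illustrative assumptions:

```python
import numpy as np

def build_baseline(good_images):
    # Stack conforming product images and compute a per-pixel mean and std.
    stack = np.stack(good_images).astype(np.float64)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6  # epsilon avoids divide-by-zero

def anomaly_score(image, mean, std):
    # Max absolute z-score across pixels: how far outside the "good" distribution.
    z = np.abs((image.astype(np.float64) - mean) / std)
    return float(z.max())

def inspect(image, mean, std, threshold=5.0):
    # threshold is the tunable sensitivity knob; 5.0 is an arbitrary starting point.
    score = anomaly_score(image, mean, std)
    return ("FAIL" if score > threshold else "PASS"), score
```

A unit whose pixels sit within the learned distribution passes; a localized deviation (a scratch, a missing component) produces a large z-score and fails, even though no defect of that type was ever labeled.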
- 01
Anomaly-Based Detection
The model learns what conforming product looks like from passing units, then flags statistical deviations — which means novel defect types are caught without being labeled in advance. Sensitivity is tunable per SKU so tolerance bands reflect your actual dimensional and finish specs. Operator accept/reject feedback during the first two weeks refines the model's decision boundary without retraining from scratch.
- 02
Multi-Defect Classification
A secondary classifier categorizes confirmed anomalies by type: surface finish, dimensional deviation, assembly error, or contamination. Each category carries an independent confidence threshold, so a surface scratch and a missing component can trigger different downstream actions. Classification output is stored per unit and available for SPC analysis.
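The per-category thresholding described above can be sketched as follows; the category names come from the taxonomy in this document, while the threshold values and function signature are placeholder assumptions to be tuned per deployment:

```python
# Independent confidence thresholds per defect category (illustrative values).
CATEGORY_THRESHOLDS = {
    "surface_finish": 0.80,
    "dimensional": 0.90,
    "assembly": 0.95,
    "contamination": 0.85,
}

def classify(probabilities, thresholds=CATEGORY_THRESHOLDS):
    # probabilities: category -> confidence from the secondary classifier.
    # A category is confirmed only if it clears its own threshold.
    confirmed = {c: p for c, p in probabilities.items() if p >= thresholds.get(c, 1.0)}
    if not confirmed:
        return None, probabilities  # anomaly flagged but no category confirmed
    top = max(confirmed, key=confirmed.get)
    return top, probabilities
```

Because each category has its own bar, a 0.85-confidence surface scratch is confirmed while a 0.90-confidence assembly error is not, letting downstream actions differ by defect type.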
- 03
Production Line Integration
The inference server interfaces with GigE Vision and USB3 Vision cameras through the camera vendor's SDK and sends pass/fail signals to PLC systems over Modbus TCP or OPC-UA. Rejected units trigger a diversion actuator at line speed with no added cycle time. Camera trigger timing, exposure, and lighting configuration are handled during the 3-5 week setup engagement.
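The disposition-to-PLC mapping can be shown as a small pure function. The coil addresses below are hypothetical — the real mapping comes from your PLC program — and the actual writes would go through a Modbus TCP client such as pymodbus; only the routing logic is sketched here:

```python
from enum import IntEnum

class Coil(IntEnum):
    # Assumed Modbus coil addresses; the real mapping is defined in the PLC program.
    PASS_SIGNAL = 0
    FAIL_SIGNAL = 1
    DIVERT = 2

def plc_writes(disposition):
    # Map an inspection disposition to the coil writes sent over Modbus TCP.
    if disposition == "PASS":
        return [(Coil.PASS_SIGNAL, True), (Coil.FAIL_SIGNAL, False), (Coil.DIVERT, False)]
    # Any non-pass disposition energizes the fail signal and the diversion actuator.
    return [(Coil.PASS_SIGNAL, False), (Coil.FAIL_SIGNAL, True), (Coil.DIVERT, True)]
```

Keeping this mapping as a pure function makes it testable offline, without a live PLC on the bench.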
- 04
Traceability Documentation
Every inspected unit generates a timestamped record: raw image, anomaly heatmap, defect classification, confidence score, and final disposition. Records are stored in PostgreSQL and structured to support ISO 9001 and IATF 16949 audit requirements. Retention policy, report format, and export frequency are configurable at deployment.
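The per-unit record described above maps naturally onto a flat schema. A minimal sketch, assuming hypothetical field and function names (the real PostgreSQL table layout is fixed at deployment):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class InspectionRecord:
    # One row per inspected unit, matching the traceability fields in the text.
    unit_id: str
    timestamp: str            # ISO 8601, UTC
    image_path: str           # raw camera frame
    heatmap_path: str         # anomaly heatmap overlay
    classification: Optional[str]  # defect category, or None for passing units
    confidence: float
    disposition: str          # "PASS" or "FAIL"

def make_record(unit_id, image_path, heatmap_path, classification, confidence, disposition):
    return InspectionRecord(
        unit_id=unit_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        image_path=image_path,
        heatmap_path=heatmap_path,
        classification=classification,
        confidence=confidence,
        disposition=disposition,
    )
```

`asdict(record)` yields a dict ready for a parameterized PostgreSQL INSERT, and the flat structure keeps audit exports for ISO 9001 / IATF 16949 straightforward.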
Build this agent
for your workflow.
We custom-build each agent to fit your data, your rules, and your existing systems.
Free 30-min scoping call