A leading automotive OEM
Predictive analytics for production quality.
Results
40%
Fewer false positives
2.5×
Faster triage
12 wk
End-to-end
Zero
New tools to learn
The challenge
False-positive overload in defect detection. The existing quality system flagged everything — real defects, sensor noise, calibration drift. Production line managers spent more time triaging alerts than fixing actual problems.
Every shift started with hundreds of alerts, most of them noise. The team had learned to ignore the system entirely — which meant real defects slipped through at the same rate as before the system existed. A quality tool nobody trusts is worse than no tool at all.
Our approach
Data audit
Mapped every sensor feed, quality checkpoint, and historical defect record. Found three data sources nobody knew existed — including a calibration log that explained 60% of the false positives.
Model development
Built classification models in Python/scikit-learn trained on actual defect outcomes, not just threshold breaches. Tuned for precision over recall — fewer alerts, higher confidence.
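The precision-over-recall tuning described above can be sketched in scikit-learn. This is an illustrative example on synthetic data, not the client's pipeline: the features, class balance, and the 0.8 threshold are all assumptions, and the real system was trained on historical defect outcomes.

```python
# Illustrative sketch only: synthetic data stands in for sensor features
# and defect labels; all names and values here are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: ~10% of parts are actual defects.
X, y = make_classification(
    n_samples=2000, n_features=12, weights=[0.9, 0.1], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]

# Favour precision over recall: raise the decision threshold above the
# default 0.5 so only high-confidence predictions become alerts.
threshold = 0.8
alerts = proba >= threshold

print(f"precision: {precision_score(y_test, alerts):.2f}")
print(f"recall:    {recall_score(y_test, alerts):.2f}")
```

Raising the threshold trades missed borderline cases for fewer false alarms, which is the right trade when operators have learned to ignore a noisy system.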
Dashboard integration
Connected predictions to a Power BI layer plant managers already used. No new tool to learn. Alerts now show probability scores, not just pass/fail.
Validation & handover
Ran the model alongside the old system for 4 weeks. Documented every discrepancy. Trained the quality team to retrain the model when product specs change.
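The shadow-run comparison above amounts to logging, for every part, what the old system and the new model each decided, then pulling out the disagreements for manual review. A minimal sketch, assuming both systems' alerts have been joined into one table (the column names and sample rows are hypothetical):

```python
# Hypothetical shadow-run log; in the real project this would be joined
# from the old quality system's alerts and the new model's predictions.
import pandas as pd

log = pd.DataFrame({
    "part_id":          [101, 102, 103, 104],
    "old_system_alert": [True, True, False, True],
    "model_alert":      [True, False, False, False],
})

# Discrepancies: parts where the two systems disagree. Each of these
# gets a manual quality check and a documented resolution.
discrepancies = log[log["old_system_alert"] != log["model_alert"]]
print(discrepancies)
```

Reviewing every disagreement, rather than aggregate metrics alone, is what builds the quality team's trust before the old system is switched off.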
The turning point was discovering a sensor calibration log buried in a shared drive that nobody had linked to the quality data. It explained why one production line had 3× the alert rate of others — the sensors were miscalibrated after a maintenance cycle, and the quality system had been faithfully flagging the drift as defects for four months.
Tech stack
Power BI · Azure Data Lake · Python · scikit-learn · Predictive models · Data engineering
Have a similar challenge?
20 minutes. We'll tell you what we'd do, what it costs, and whether you actually need us.