
Automated Defect Detection Model

FY26 · 57 evidence items · 57% claim readiness

Project summary

FlowForge is building a computer vision model to detect surface defects that present inconsistently under variable lighting and material conditions, removing the need for manual visual inspection on the production line.

Technical uncertainty

The team could not determine whether a convolutional model could reliably distinguish genuine surface defects from acceptable surface variation across the range of material types and lighting configurations present on the production line.

Working hypothesis

If a computer vision model were trained on labelled defect images across multiple material classes, it might maintain acceptable precision and recall under the variable lighting conditions present in the production environment.
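
Testing this hypothesis means measuring precision and recall separately for each lighting condition, since aggregate metrics can hide failures under low light. The sketch below is illustrative only; the function name, record format, and sample data are assumptions, not taken from FlowForge's codebase.

```python
# Hypothetical sketch: per-lighting-condition precision/recall for a
# binary defect classifier (1 = defect). Data here is made up.
from collections import defaultdict

def precision_recall_by_condition(records):
    """records: iterable of (lighting, y_true, y_pred) tuples."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for lighting, y_true, y_pred in records:
        c = counts[lighting]
        if y_pred and y_true:
            c["tp"] += 1          # defect correctly flagged
        elif y_pred and not y_true:
            c["fp"] += 1          # acceptable variation flagged as defect
        elif y_true and not y_pred:
            c["fn"] += 1          # missed defect
    out = {}
    for lighting, c in counts.items():
        precision = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
        recall = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
        out[lighting] = (precision, recall)
    return out

records = [
    ("bright", 1, 1), ("bright", 0, 0), ("bright", 1, 1), ("bright", 0, 1),
    ("low", 1, 0), ("low", 0, 1), ("low", 1, 1), ("low", 0, 0),
]
results = precision_recall_by_condition(records)
```

On this toy data the model looks strong under bright light but degrades under low light, which is the failure mode the project's V2 evaluation reported.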

Claim readiness

57%

Key figures

Est. eligible spend
$146K
Evidence items
57
Confidence
Medium
Documentation gaps
2

Evidence gaps

  • Missing model comparison report
  • Unclear cost allocation

Experimentation timeline

  1. Jan 2026: Defect detection model project initiated (Jira)
  2. Feb 2026: Initial model evaluation, insufficient precision (Drive)
  3. Mar 2026: Dataset augmentation and retraining commenced (GitHub)
  4. Apr 2026: V2 model tested, false positive rate too high (Jira)
  5. May 2026: Custom CNN with lighting normalisation in progress (GitHub)

Experimentation iterations

  1. Off-the-shelf defect classifier
     Outcome: Insufficient precision on variable surface textures
     Evidence: Model Eval V1 (Google Drive)
  2. Fine-tuned ResNet with augmented dataset
     Outcome: Improved recall but high false positive rate under low light
     Evidence: Model Eval V2, Jira DEF-74
  3. Custom CNN with lighting normalisation layer
     Outcome: Ongoing, performance under evaluation
     Evidence: Model Eval V3, GitHub PR #301
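
Iteration 3 refers to a "lighting normalisation layer". One common interpretation is per-image standardisation (subtract mean brightness, divide by contrast), which removes global illumination shifts before the network sees the image. The sketch below shows that idea in minimal form; it is an assumption about the technique, as the actual layer in Model Eval V3 is not documented here.

```python
# Illustrative per-image lighting normalisation: standardise pixel values
# so a uniformly brighter copy of the same scene maps to the same input.
# This is a plausible sketch, not FlowForge's implementation.

def normalise_lighting(pixels, eps=1e-6):
    """pixels: flat list of grayscale values for one image."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = max(var ** 0.5, eps)   # eps guards against flat images
    return [(p - mean) / std for p in pixels]

dim = normalise_lighting([10, 20, 30, 40])
bright = normalise_lighting([110, 120, 130, 140])  # same scene, brighter lamp
```

After normalisation the dim and bright versions are numerically identical, which is the property that would let one model generalise across lighting configurations.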

Supporting evidence (2 items)

Team involvement

  • Maya Chen (ML Engineer): 65% R&D time, 42 signals
  • Anika Patel (Manufacturing Lead): 45% R&D time, 28 signals
  • Liam Brooks (Product Engineer): 28% R&D time, 19 signals

Eligible cost breakdown

Total estimated eligible expenditure: $146K

  • Engineering salaries: $95K (medium confidence)
  • Contractor costs: $28K (medium confidence)
  • Cloud / compute: $15K (medium confidence)
  • Dataset & tooling: $8K (low confidence)
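
The component costs above can be checked against the stated total with simple arithmetic:

```python
# Sanity check: component costs (in $K) should sum to the stated $146K.
components = {
    "Engineering salaries": 95,
    "Contractor costs": 28,
    "Cloud / compute": 15,
    "Dataset & tooling": 8,
}
total_k = sum(components.values())
print(total_k)  # 146
```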

Confidence breakdown

  • Technical uncertainty clarity: 72%
  • Experimentation history: 65%
  • Evidence completeness: 58%
  • Cost allocation: 42%
  • Documentation quality: 55%
  • Compliance confidence: 61%

Missing information

  • Upload model comparison report showing performance across versions
  • Clarify cost allocation between ML compute and routine operations
  • Link dataset acquisition invoices to project activity
  • Confirm team involvement percentages for ML engineers

Suggested next steps

  1. Upload model performance comparison report
  2. Confirm cost allocation with finance team
  3. Link compute invoices to project
  4. Submit Project 2 to Canopy for review

Figures update automatically as new evidence is connected. Your Canopy specialist will confirm all positions during claim preparation.