Research-grade CV portfolio project

AI vehicle damage inspection.

A polished end-to-end computer vision system for classifying car damage across 12 real-world categories.

The project trains and evaluates transfer-learning models with FastAI + PyTorch, selects ResNet50 as the best backbone at 78.03% top-1 accuracy, and deploys the model through HuggingFace Spaces with a GitHub Pages web demo.

  • 4,500+ labeled images
  • 12 damage categories
  • ResNet50 best model
  • 78.03% top-1 accuracy
  • FastAI + PyTorch training
  • HF Spaces deployment

Problem context

Why this project matters

Vehicle damage inspection is a high-volume visual workflow where consistency, speed, and traceability matter. This project frames the model as an applied inspection assistant rather than a generic image classifier.

Insurance

Claim triage

First-pass classification can route obvious cases quickly while flagging ambiguous images for manual review.

Operations

Fleet inspection

Standardized category predictions help document vehicle condition across repeated inspection workflows.

Repair

Shop intake

Structured predictions make repair intake faster by turning uploaded photos into searchable damage labels.

Dataset categories

12-class vehicle damage taxonomy

The dataset covers cosmetic, structural, environmental, tire, glass, and no-damage cases across 4,500+ labeled images.

  • Car Dent: panel deformation
  • Car Scratch: surface paint damage
  • Cracked Windshield: glass fracture
  • Broken Bumper: impact damage
  • Flat Tire: tire failure
  • Flood Damage: water exposure
  • Fire Damage: burn and smoke damage
  • Hail Damage: repeated dents
  • Broken Side Mirror: mirror assembly damage
  • Rust/Corrosion: oxidation and decay
  • Vandalism/Keyed: intentional surface marks
  • No Damage: negative class

Evaluation

Results at a glance

ResNet50 is the strongest model, but the detailed class-level view shows where the real inspection difficulty sits: subtle scratches, ambiguous dents, and context-dependent damage.

  • ResNet50: 78.03% top-1
  • ResNet34: 77.93% top-1
  • EfficientNet-B0: 72.18% top-1

Model comparison

Model comparison chart

ResNet50 narrowly outperforms ResNet34. The small gap suggests the task is constrained by visual ambiguity and label overlap, not only backbone capacity.

Model comparison takeaways

  • ResNet50 is selected as the best model at 78.03% top-1 accuracy.
  • ResNet34 is nearly tied at 77.93%, making it a strong lightweight baseline.
  • EfficientNet-B0 underperforms in this training setup at 72.18%.
  • The outcome points toward data quality, class definition, and image ambiguity as important next levers.
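The top-1 numbers above are easy to reproduce from saved validation predictions. A minimal plain-Python sketch (the label lists below are illustrative, not the project's real validation data):

```python
def top1_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label matches the truth."""
    if not y_true:
        raise ValueError("empty prediction list")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Illustrative labels only -- not the project's actual data.
truth = ["dent", "scratch", "flat_tire", "dent"]
preds = ["dent", "vandalism", "flat_tire", "dent"]
print(f"{top1_accuracy(truth, preds):.2%}")  # 3 of 4 correct -> 75.00%
```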

Per-class accuracy

Per-class accuracy chart

Visually distinctive classes are strongest. Thin or ambiguous surface-level defects remain the hardest classes.

Class-level interpretation

Strongest classes: Broken Side Mirror (98.9%), Flat Tire (95.5%), Rust/Corrosion (94.7%), Broken Bumper (89.3%), Cracked Windshield (89.0%).

Most challenging classes: Car Scratch (58.6%) and Fire Damage (66.7%). These categories vary heavily in scale, lighting, texture, and context.
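Per-class accuracy of this kind is just the diagonal of the confusion matrix divided by each class's row total. A small sketch (the counts below are made up to echo the reported figures, not taken from the real evaluation):

```python
def per_class_accuracy(confusion, labels):
    """confusion[i][j] = count of true class i predicted as class j."""
    out = {}
    for i, label in enumerate(labels):
        row_total = sum(confusion[i])
        out[label] = confusion[i][i] / row_total if row_total else 0.0
    return out

labels = ["Flat Tire", "Car Scratch"]   # illustrative subset of the 12 classes
confusion = [[95, 5],                   # made-up counts
             [29, 41]]
acc = per_class_accuracy(confusion, labels)
# Flat Tire -> 0.95; Car Scratch -> 41/70, roughly 0.586
```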

This is the research value of the project: the presentation does not stop at top-line accuracy; it shows failure modes that matter for deployment.

Confusion matrix

ResNet50 confusion matrix

The confusion matrix exposes semantically meaningful errors between visually similar damage classes.

Top confusion pairs

  • Car Scratch → Vandalism/Keyed (similar thin surface lines)
  • Car Dent → Broken Bumper (overlapping panel deformation)
  • Hail Damage → Car Dent (clusters of small dents)
  • Fire Damage → Flood Damage (shared scene context)
  • No Damage → Car Scratch / Car Dent (subtle defects)
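Pairs like these can be ranked automatically by sorting the off-diagonal cells of the confusion matrix. A sketch with a made-up matrix:

```python
def top_confusions(confusion, labels, k=3):
    """Return the k largest off-diagonal (true, predicted, count) cells."""
    cells = [(labels[i], labels[j], confusion[i][j])
             for i in range(len(labels))
             for j in range(len(labels))
             if i != j and confusion[i][j] > 0]
    return sorted(cells, key=lambda c: c[2], reverse=True)[:k]

labels = ["Scratch", "Keyed", "Dent"]   # illustrative subset
confusion = [[50, 12, 3],               # made-up counts
             [9, 60, 1],
             [2, 0, 70]]
pairs = top_confusions(confusion, labels, k=2)
# -> [("Scratch", "Keyed", 12), ("Keyed", "Scratch", 9)]
```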

Methodology and pipeline

From raw images to deployed model

The workflow follows a practical applied ML pipeline: collect, clean, augment, train, evaluate, export, and deploy.

1

Collect

Build a labeled image dataset across 12 vehicle condition categories.

2

Prepare

Split data, resize images, normalize inputs, and apply FastAI augmentations.

3

Train

Fine-tune ResNet34, ResNet50, and EfficientNet-B0 with PyTorch-backed FastAI.

4

Evaluate

Compare model accuracy, inspect per-class behavior, and analyze confusion patterns.

5

Deploy

Export the best model and serve inference through Gradio on HuggingFace Spaces.
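Steps 2–5 map directly onto the FastAI high-level API. The sketch below is an outline under stated assumptions, not the project's actual training notebook: the folder-per-class layout (`data/<class_name>/*.jpg`), the 20% validation split, and the epoch count are all assumptions.

```python
def train_and_export(data_dir="data", epochs=5,
                     out="CarDamageClassifierV1.pkl"):
    """Fine-tune a pretrained ResNet50 and export a FastAI learner."""
    # Imports are local so this sketch stays inert until called.
    from fastai.vision.all import (
        ImageDataLoaders, Resize, aug_transforms,
        vision_learner, accuracy, resnet50,
    )

    # Folder-per-class layout; hold out 20% of images for validation.
    dls = ImageDataLoaders.from_folder(
        data_dir, valid_pct=0.2, seed=42,
        item_tfms=Resize(224), batch_tfms=aug_transforms(),
    )
    learn = vision_learner(dls, resnet50, metrics=accuracy)
    learn.fine_tune(epochs)   # one frozen epoch, then unfrozen epochs
    learn.export(out)         # pickled learner for deployment
    return learn
```

Swapping `resnet50` for `resnet34` or an EfficientNet backbone reproduces the model comparison above.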

Pipeline overview

Training and deployment pipeline diagram

Repository

Project structure

The repository separates notebooks, deployment code, model artifacts, generated documentation assets, and the GitHub Pages frontend.

car-damage-classifier/
|-- deployment/
|   |-- app.py
|   `-- requirements.txt
|-- models/
|   `-- CarDamageClassifierV1.pkl
|-- notebooks/
|   |-- data_preparation.ipynb
|   `-- TrainingAndCleaning.ipynb
|-- docs/
|   |-- index.md
|   |-- car_damage.html
|   `-- assets/
|       |-- charts/
|       |-- confusion-matrices/
|       |-- samples/
|       `-- sections/
|-- scripts/
|   `-- generate_charts.py
`-- README.md

Deployment

Static portfolio frontend, hosted ML backend

GitHub Pages presents the project and demo UI. HuggingFace Spaces hosts the Gradio inference backend and model runtime.

HuggingFace Spaces

Interactive Gradio deployment for real-time image classification using the exported FastAI model.

Space: wrezachow/car-damage-classifier

Model: models/CarDamageClassifierV1.pkl

Open Model / Space
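A Space like this typically wires the exported learner into a Gradio interface along the lines below. This is a hedged reconstruction, not the repository's actual `deployment/app.py`; `build_app` and `format_prediction` are names invented here.

```python
def format_prediction(labels, probs):
    """Map class labels to float confidences for a Gradio Label output."""
    return {lbl: float(p) for lbl, p in zip(labels, probs)}

def build_app(model_path="models/CarDamageClassifierV1.pkl"):
    """Wire the exported FastAI learner into a Gradio interface."""
    # Local imports keep the helper above dependency-free.
    import gradio as gr
    from fastai.vision.all import load_learner, PILImage

    learn = load_learner(model_path)

    def classify(img_path):
        _, _, probs = learn.predict(PILImage.create(img_path))
        return format_prediction(learn.dls.vocab, probs)

    return gr.Interface(
        fn=classify,
        inputs=gr.Image(type="filepath"),
        outputs=gr.Label(num_top_classes=3),
        title="Car Damage Classifier",
    )

# Usage: build_app().launch()
```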

GitHub Pages

Research-style landing page plus a browser demo that calls the Space API through @gradio/client.

Landing: docs/index.md

Demo: docs/car_damage.html

Open Live Demo
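The browser demo talks to the Space through @gradio/client; the same API can be called from Python with the `gradio_client` package. A sketch, assuming the Space exposes the default `/predict` endpoint (verify with `client.view_api()`):

```python
def classify_remote(image_path, space="wrezachow/car-damage-classifier"):
    """Send an image to the hosted Space and return its prediction."""
    from gradio_client import Client, handle_file  # lazy import

    client = Client(space)
    # api_name="/predict" is the usual default; confirm against view_api().
    return client.predict(handle_file(image_path), api_name="/predict")
```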

Quick start

Run locally

Use the deployment requirements and exported FastAI model to launch the Gradio app on your machine.

git clone https://github.com/wrezachow/car-damage-classifier.git
cd car-damage-classifier

python -m venv .venv
.venv\Scripts\Activate.ps1        # Windows (PowerShell)
# source .venv/bin/activate       # macOS / Linux

pip install -r deployment/requirements.txt
python deployment/app.py

Tech stack

Applied ML tooling

The stack is intentionally pragmatic: mature transfer-learning tooling for training and simple hosted deployment for inference access.

Python · FastAI · PyTorch · ResNet50 · EfficientNet-B0 · Jupyter · Gradio · HuggingFace Spaces · GitHub Pages · @gradio/client