Overview
PolePadAI started as a hackathon project, but the part I cared most about was the technical system underneath it. The real challenge was not just building a demo that looked convincing. It was stitching together a computer-vision pipeline, OCR, backend APIs, hosting, and a frontend workflow into something that could process inspection data in a way that actually felt useful.
The Problem
Utility pole inspection workflows involve a lot of repetitive manual effort. Inspectors need to review imagery, identify relevant assets, pull out readable labels, and flag issues like vegetation concerns or condition risks. That process is slow, and once enough photos pile up, consistency becomes a real problem.
We wanted a system that could reduce the time spent manually documenting what was already visible in the image set.
Detection Pipeline
The core of the system was a YOLO-based object-detection pipeline that I trained to identify poles and their identification tags in utility-pole imagery. Once the model found those regions, the system cropped them and passed the outputs into later processing stages.
That created a clean flow:
- Ingest inspection images
- Detect the important pole and tag regions
- Crop candidate areas for OCR and review
- Send the results back into the application for operator validation
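The detect-then-crop flow above can be sketched roughly like this. It assumes an Ultralytics YOLO model; the weights file `pole_tags.pt` and the `clamp_box` helper are illustrative, not the original code.

```python
def clamp_box(box, width, height):
    """Clamp an (x1, y1, x2, y2) box to image bounds so crops never fail."""
    x1, y1, x2, y2 = box
    return (max(0, int(x1)), max(0, int(y1)),
            min(width, int(x2)), min(height, int(y2)))

def detect_and_crop(image_path):
    """Run detection and return cropped candidate regions for OCR and review."""
    from ultralytics import YOLO  # pip install ultralytics
    from PIL import Image

    model = YOLO("pole_tags.pt")      # hypothetical fine-tuned weights
    image = Image.open(image_path)
    result = model(image_path)[0]     # one input image -> one Results object
    crops = []
    for xyxy in result.boxes.xyxy.tolist():
        box = clamp_box(xyxy, image.width, image.height)
        crops.append(image.crop(box))
    return crops
```

Clamping detector boxes to the image bounds is a small but worthwhile habit: slightly out-of-range coordinates are common, and they should not crash the cropping step.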
The goal was to reduce human effort, not remove humans from the process entirely.
OCR and Heuristics
Once the relevant regions were isolated, EasyOCR handled extraction of visible tag text. This gave the system a way to convert image content into structured data without forcing a user to manually transcribe identifiers every time.
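A minimal sketch of that OCR step, assuming EasyOCR is installed. The `normalize_tag` helper and the `MIN_CONF` cutoff are illustrative choices, not the project's original logic.

```python
MIN_CONF = 0.4  # hypothetical confidence cutoff for keeping a reading

def normalize_tag(text):
    """Uppercase and keep only characters that plausibly appear on pole tags."""
    return "".join(c for c in text.upper() if c.isalnum() or c in "-/")

def read_tags(crop):
    """Extract candidate tag strings from a cropped image region."""
    import easyocr  # pip install easyocr
    reader = easyocr.Reader(["en"], gpu=False)
    results = reader.readtext(crop)  # list of (bbox, text, confidence)
    return [normalize_tag(t) for _, t, conf in results if conf >= MIN_CONF]
```

Normalizing the raw OCR output matters more than it looks: dropping whitespace and lowercase noise makes the extracted identifiers usable as structured data downstream.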
For vegetation-related checks, we supplemented the deep-learning stack with OpenCV heuristics. Not every problem needed a full deep-learning pass; in some cases, classic image-processing logic was a faster and more practical way to flag suspicious conditions around the inspection target.
Backend and Hosting
I built the FastAPI backend to manage the application workflow and serve the analysis layer. One of the more interesting deployment choices was serving the system from my home server through Cloudflare, which let me move quickly without waiting on a more formal infrastructure path during the hackathon timeline.
That backend was responsible for:
- Receiving and organizing inspection submissions
- Running or coordinating the analysis pipeline
- Returning structured results for review
- Supporting the frontend workflow used in the live demo
Frontend and Data Layer
The user-facing side was built with Next.js, and AWS handled the data layer for storing inspection-related records. That meant we were trying to coordinate frontend UX, backend APIs, AI processing, and cloud persistence all at the same time, which was both the strength and the biggest risk in the build.
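The post does not say which AWS service backed the data layer, so this sketch assumes DynamoDB purely for illustration; the table name and record fields are hypothetical.

```python
import time

def build_record(image_name, tags, vegetation_flag):
    """Shape one inspection result into a persistence-ready item."""
    return {
        "image_name": image_name,
        "tags": tags,
        "vegetation_flag": vegetation_flag,
        "created_at": int(time.time()),
    }

def save_record(record, table_name="inspections"):
    """Persist an inspection record (hypothetical DynamoDB table)."""
    import boto3  # pip install boto3
    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(Item=record)
```

Keeping the record-shaping step separate from the persistence call makes the data layer easy to test without touching AWS at all.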
The system worked, but it also made the coordination challenge obvious. Once you have AI outputs, database state, file handling, and multiple app surfaces all landing at once, weak contracts between components become expensive very quickly.
Lessons Learned
The biggest lesson was about interface discipline. We overscoped the integration surface by trying to land AWS, Next.js, and multiple AI components in the same sprint without locking down the boundaries early enough.
If I were rebuilding it, I would define the technical contracts between the pipeline, API, and frontend first, then let each part move independently. That would make the system faster to stabilize and easier to extend.
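Concretely, "defining the contracts first" could mean a shared schema module that the pipeline produces and the API returns, validated at every hop. The field names below are illustrative, not the original contract.

```python
from pydantic import BaseModel, Field

class TagReading(BaseModel):
    """One OCR reading with its confidence score."""
    text: str
    confidence: float = Field(ge=0.0, le=1.0)

class PoleInspection(BaseModel):
    """The boundary object passed between pipeline, API, and frontend."""
    image_name: str
    tags: list[TagReading] = []
    vegetation_flag: bool = False
```

With a module like this locked down on day one, the detection pipeline, the FastAPI layer, and the Next.js frontend can each evolve independently as long as they keep emitting and consuming the same shapes.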
Even with that lesson, PolePadAI is one of my favorite projects because it sits right at the intersection of applied AI, backend engineering, infrastructure, and real-world utility.