Stephen
Hackett-Delaney
Full Stack Software Engineer
When NULL Breaks Everything
February 16, 2026
A brand had uploaded their full product catalog — hundreds of SKUs, sizing, images, the lot. The import ran without errors. The database had rows. But when they opened the app, the catalog was empty.
No error messages. No missing data warnings. Just... nothing.
How Did We Get Here?
We were building Maura, a fashion platform connecting brands with stylists. Brands onboard by uploading a CSV of their product catalog, which gets parsed into clothing templates and variants in Supabase.
The import pipeline was straightforward:
- Brand uploads CSV via the web onboarding flow
- Backend parses rows into clothing_template and clothing_template_variant records
- Mobile app queries templates grouped by department
The CSV came from the brand's merchandising team. We assumed it matched our expected schema. It didn't — and the way it failed was invisible.
The Symptoms
Two problems surfaced simultaneously, which made debugging harder:
Problem 1: All variants collapsed into a single template.
The catalog had dozens of distinct products, but the app showed one template with hundreds of variants underneath it. The grouping logic uses style_number to determine which variants belong to which template. The CSV had a column called Style Number (with a space), but the parser expected style_number (with an underscore). Since the parser couldn't find the expected column, every row got the same fallback value — grouping everything together.
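That failure mode is easy to reproduce in isolation. A minimal sketch of the grouping bug, assuming a key-value row shape — the row data and the "UNKNOWN" fallback value here are illustrative, not our actual parser:

```typescript
// When the expected column is missing, every row falls back to the same
// key, so all variants collapse under a single template.
type CsvRow = Record<string, string>;

function groupByStyleNumber(rows: CsvRow[]): Map<string, CsvRow[]> {
  const groups = new Map<string, CsvRow[]>();
  for (const row of rows) {
    // Parser looks for "style_number"; "Style Number" (with a space) misses.
    const key = row["style_number"] ?? "UNKNOWN"; // same fallback for every row
    const bucket = groups.get(key) ?? [];
    bucket.push(row);
    groups.set(key, bucket);
  }
  return groups;
}

// Two distinct products, but the header has a space:
const rows: CsvRow[] = [
  { "Style Number": "A-100", name: "Linen Blazer" },
  { "Style Number": "B-200", name: "Silk Scarf" },
];
const grouped = groupByStyleNumber(rows);
// grouped ends up with a single "UNKNOWN" bucket holding both rows
```

No row errors, no exceptions — just one mega-group where there should have been many.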
Problem 2: The entire catalog was invisible.
Even the single mega-template didn't show up. The app's catalog view filters by department:
const { data } = await supabase
  .from("clothing_template")
  .select("*, variants:clothing_template_variant(*)")
  .eq("brand_id", brandId)
  .eq("department", selectedDepartment);

The CSV had Department as a column header, but the casing mismatch meant it was never mapped. Every row was inserted with department: NULL. The filter for "Womenswear" or "Menswear" matched zero rows.
NULL didn't throw an error. It just silently excluded everything.
The Process Shortcomings
We had no CSV validation before import. The pipeline went straight from "parse CSV" to "insert rows." There was no step that said "check that the columns we need actually exist in this file."
This is the kind of validation you skip because it feels unnecessary — "of course the CSV will have the right columns." But CSVs come from humans, from Excel exports, from other systems. Column names get capitalized, spaced, abbreviated, or just plain misspelled.
We had no post-import verification. After the import, we didn't check: "Did the data we just inserted actually make sense?" A simple count of templates with NULL department would have flagged the issue immediately.
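That check is a few lines. A sketch of a post-import sanity pass over the rows just inserted — the TemplateRow shape is illustrative, not our actual schema types:

```typescript
// Post-import sanity check: flag templates whose required department
// field came through as NULL or blank.
type TemplateRow = { id: string; department: string | null };

function rowsWithMissingDepartment(rows: TemplateRow[]): TemplateRow[] {
  return rows.filter(
    (r) => r.department === null || r.department.trim() === ""
  );
}

const imported: TemplateRow[] = [
  { id: "t1", department: "Womenswear" },
  { id: "t2", department: null },
  { id: "t3", department: "  " },
];
const bad = rowsWithMissingDepartment(imported);
if (bad.length > 0) {
  // Fail the import loudly instead of letting NULL rows vanish behind
  // department filters later.
  console.warn(`${bad.length} templates imported with no department`);
}
```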
We made column mapping case-sensitive without telling anyone. The brand team had no way to know that Style Number and style_number were different in our system. We didn't document the expected format or provide a template CSV.
The Fix
The immediate fix was re-importing with corrected column headers. But the lasting fix was adding a validation step:
const REQUIRED_HEADERS = [
  "style_number",
  "name",
  "department",
  "sku",
  "size",
  "color",
] as const;

function validateCsvHeaders(headers: string[]): {
  valid: boolean;
  missing: string[];
  suggestions: Record<string, string>;
} {
  const normalized = headers.map((h) => h.toLowerCase().trim());
  const missing: string[] = [];
  const suggestions: Record<string, string> = {};

  for (const required of REQUIRED_HEADERS) {
    if (!normalized.includes(required)) {
      missing.push(required);
      // Check for common variations
      const fuzzy = normalized.find(
        (h) =>
          h.replace(/[\s_-]/g, "") === required.replace(/[\s_-]/g, "")
      );
      if (fuzzy) {
        suggestions[required] = headers[normalized.indexOf(fuzzy)];
      }
    }
  }

  return {
    valid: missing.length === 0,
    missing,
    suggestions,
  };
}

Run this before any parsing begins. If it fails, show the user exactly what's wrong:
Missing required columns: style_number, department
Did you mean?
"style_number" → found "Style Number" (rename or re-export)
"department" → found "Department" (rename or re-export)The fuzzy matching catches the most common issue — casing and spacing differences — and tells the user how to fix it without guessing.
What I Learned
NULL is not an error — it's a silence. SQL uses three-valued logic: an equality check against NULL evaluates to NULL ("unknown"), not true or false, and a WHERE clause only keeps rows where the predicate is true. So WHERE department = 'Womenswear' will never match a NULL row. This is fundamental SQL behavior, but it's easy to forget when you're focused on application code.
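The three-valued logic is easy to demonstrate. A sketch that mimics SQL's comparison semantics in TypeScript (the sqlEq helper and the sample rows are illustrative):

```typescript
// Mimic SQL's three-valued equality: comparing anything with NULL yields
// "unknown", and WHERE keeps only rows where the predicate is strictly true.
type Tri = true | false | "unknown";

function sqlEq(a: string | null, b: string | null): Tri {
  if (a === null || b === null) return "unknown";
  return a === b;
}

const templates = [
  { name: "Linen Blazer", department: "Womenswear" as string | null },
  { name: "Silk Scarf", department: null as string | null },
];

// Equivalent of: WHERE department = 'Womenswear'
const matched = templates.filter(
  (t) => sqlEq(t.department, "Womenswear") === true
);
// The NULL row is excluded without any error; it simply never matches.
```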
Validate at the boundary. The CSV is external input. Every assumption about its structure is a potential failure point. Header validation takes minutes to implement and saves hours of debugging invisible data.
Make failures loud. If the import had rejected rows with NULL required fields instead of silently inserting them, we'd have caught this during onboarding — not after the brand wondered why their catalog was empty.
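One way to make it loud: validate each row before insert and abort the batch if any required field is missing. A sketch — the field names come from our schema, but validateRows itself is a hypothetical guard, not our actual pipeline code:

```typescript
// Reject rows with missing required fields up front, instead of inserting
// NULLs that vanish behind filters later.
const REQUIRED_FIELDS = ["style_number", "name", "department"] as const;

type RowError = { rowIndex: number; field: string };

function validateRows(rows: Record<string, string | null>[]): RowError[] {
  const errors: RowError[] = [];
  rows.forEach((row, rowIndex) => {
    for (const field of REQUIRED_FIELDS) {
      const value = row[field];
      if (value === null || value === undefined || value.trim() === "") {
        errors.push({ rowIndex, field });
      }
    }
  });
  return errors;
}

const errors = validateRows([
  { style_number: "A-100", name: "Linen Blazer", department: "Womenswear" },
  { style_number: "B-200", name: "Silk Scarf", department: null },
]);
// errors = [{ rowIndex: 1, field: "department" }] -> abort the import and
// surface this to the uploader instead of inserting the row.
```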
This is Part 1 of the Debugging series — lessons from a single debugging session that touched data validation, database scripts, caching, iOS simulators, and auth flows.