If you want the broader context, I wrote about the blocked-row remediation flow here:
[I added a blocked-row remediation loop to my CSV intake console](https://dev.to/fastapier/i-added-a-blocked-row-remediation-loop-to-my-csv-intake-console-4p3k)
Yesterday, I focused on one important problem:
blocked rows should be repairable inside the console.
That part now works.
But there was still another real operational problem.
Different companies do not send the same CSV.
One file uses:
`contact_name`, `company_name`, `email`
Another uses:
`name`, `account`, `customer_email`
And status values drift too:
`active`, `working`, `current`, `paused`
That is how import flows become brittle.
So in this update, I added company-specific CSV import profiles.
Now the console can interpret different CSV shapes on purpose instead of assuming one universal format.
## What changed
The preview flow now accepts a profile.
In this pass, I added a `legacy_sales` profile that translates:
- `name` → `contact_name`
- `account` → `company_name`
- `customer_email` → `email`
- `mobile_number` → `phone`
- `lifecycle` → `status`
It also normalizes values like:
- `working` → `active`
- `current` → `active`
- `paused` → `inactive`
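A profile like this can be expressed as plain data: a column map plus per-field value translations. Here is a minimal sketch of that idea; the structure, the `apply_profile` helper, and the exact dictionary layout are my illustration of the approach, not the console's real implementation.

```python
# Hypothetical shape for a company-specific import profile.
# Column and value mappings mirror the legacy_sales example from the post.
LEGACY_SALES_PROFILE = {
    "columns": {  # source header -> canonical header
        "name": "contact_name",
        "account": "company_name",
        "customer_email": "email",
        "mobile_number": "phone",
        "lifecycle": "status",
    },
    "values": {  # canonical field -> drifted value -> canonical value
        "status": {"working": "active", "current": "active", "paused": "inactive"},
    },
}

def apply_profile(row: dict, profile: dict) -> dict:
    """Rename columns and normalize drifted values for one CSV row."""
    out = {}
    for src, value in row.items():
        dest = profile["columns"].get(src, src)  # unmapped headers pass through
        out[dest] = profile["values"].get(dest, {}).get(value, value)
    return out
```

With this shape, adding a new company dialect means adding data, not code: `apply_profile({"name": "Ada", "lifecycle": "working"}, LEGACY_SALES_PROFILE)` yields `{"contact_name": "Ada", "status": "active"}`.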
So the system can absorb schema drift before a run is applied.
A bad row should still be blocked.
A different dialect should not.
In this example, the profile correctly understood the file structure, but one row still had an invalid email.
That is the behavior I want.
The file shape was accepted.
The actually broken row was stopped.
After fixing the invalid email inside the console, the run could continue and be applied safely.
That is the important part.
Not just detecting messy input.
Recovering from it inside the same staged workflow.
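To make that concrete, here is one way the preview step could separate "apply-ready" rows from blocked ones after a profile has run. This is a sketch under assumptions: the `validate_row` and `preview` functions, and the deliberately simple email regex, are hypothetical names I am using to illustrate the split, not the console's actual validators.

```python
import re

# Intentionally simple email check, for illustration only.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_row(row: dict) -> list[str]:
    """Return blocking errors for one already-profiled row."""
    errors = []
    email = row.get("email", "")
    if not EMAIL_RE.match(email):
        errors.append(f"invalid email: {email!r}")
    return errors

def preview(rows: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split profiled rows into apply-ready rows and blocked rows with reasons."""
    ready, blocked = [], []
    for row in rows:
        errs = validate_row(row)
        if errs:
            blocked.append((row, errs))  # held for in-console repair
        else:
            ready.append(row)
    return ready, blocked
```

The key property is that blocked rows carry their reasons, so an operator can repair the field and re-run `validate_row` on just that row before the run is applied.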
## Why this matters
A lot of CSV tools treat every unfamiliar file as “bad data.”
That is not always true.
Sometimes the row is broken.
Sometimes the source is just speaking a different dialect.
Those are different problems.
Profiles help the intake flow distinguish between:
- truly broken rows
- different-but-valid source formats
That makes the system much more useful in real operational environments.
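One way to draw that distinction in code is at the header level, before any row validation runs: if a file's headers match a registered profile, it is a dialect, not bad data. The registry, the `classify_headers` helper, and the return labels below are all hypothetical, a sketch of the classification idea rather than the real intake engine.

```python
# Canonical headers the intake engine ultimately wants.
EXPECTED = {"contact_name", "company_name", "email"}

# Hypothetical registry: profile name -> the source headers it understands.
PROFILES = {
    "legacy_sales": {"name", "account", "customer_email",
                     "mobile_number", "lifecycle"},
}

def classify_headers(headers: set[str]) -> str:
    """Decide whether an unfamiliar file is a known dialect or truly unknown."""
    if headers >= EXPECTED:
        return "canonical"  # already in the house format
    for name, sources in PROFILES.items():
        if headers <= sources:
            return f"dialect:{name}"  # different-but-valid source format
    return "unknown"  # genuinely unrecognized shape, worth a human look
```

Only files classified as `unknown` need to be treated as suspect; everything else flows into the normal profiled preview.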
## What this project is becoming
I am not trying to build a magical CSV uploader.
I am trying to build a defensive intake engine that can:
- stage messy operational data
- understand company-specific CSV dialects
- explain what will happen
- let operators repair what is actually broken
- apply changes intentionally
- preserve an audit trail
That is a much stronger shape than “upload a CSV and hope for the best.”
If you work on messy CSV onboarding, intake remediation, or company-specific import workflows, feel free to reach out: