JSON to SQL: Generate CREATE TABLE + INSERT Online
Turn a JSON array into a typed CREATE TABLE plus one INSERT per row, with type inference, nested-field handling, and dialect-specific output — all without your data ever leaving the browser.
Why developers convert JSON to SQL
You get a JSON dump from an API. Maybe it is a 5,000-row export of orders, a webhook log, or a one-off scrape. You want to load it into Postgres, run a few GROUP BY queries, and move on. The friction is the schema: you need column names, types, primary keys, and one INSERT statement per row. Writing that DDL by hand for 30 keys is busywork. A JSON-to-SQL converter automates exactly that step.
The tools that solve this — Beekeeper Studio, JSONLint, AI2SQL, TableConvert, SQLBook, SQLizer — all do roughly the same thing. Paste an array of objects, pick a dialect, get back a .sql file you can pipe straight into psql or mysql. The differences are in nested-field handling, where the conversion runs (your browser vs. their server), and how much they charge above a free tier.
The use cases that come up most often: seeding a test database with realistic data, importing a one-off API export, prototyping a schema before committing to a migration, and migrating between document stores and relational stores. None of these are exotic. They just happen often enough that doing the schema work by hand is a tax you pay every few weeks.
How JSON to SQL conversion works
The converter does four passes over your input:
- Validate. Strict JSON parsing. Trailing commas, unquoted keys, or single quotes will fail at this step. If the source is hand-edited or scraped, run it through a formatter first to catch the syntax errors before the converter does.
- Collect keys. Walk every object in the array and union all top-level keys. The result is your column list. Objects that are missing a key get NULL for that column.
- Infer types. For each column, scan every value and pick the narrowest type that fits all observed values. Details in the next section.
- Emit DDL + DML. Build a CREATE TABLE statement with the inferred types, then one INSERT per object. The good converters also escape single quotes inside string values and convert JavaScript true/false to the dialect-correct boolean literal.
That is the whole pipeline. The hard part is the type inference — it is where converters differ in quality and where you will see the most surprises in the output.
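To make the four passes concrete, here is a minimal sketch of the kind of output you can expect for a Postgres target, assuming a hypothetical two-object input of order records with id, email, total, and paid keys. The table name and exact formatting are illustrative; real converters differ in the details:

```sql
-- CREATE TABLE from pass 4, with types chosen by pass 3.
CREATE TABLE orders (
    id INTEGER,      -- every observed value was a whole number
    email TEXT,      -- plain strings fall through to the catch-all type
    total NUMERIC,   -- one decimal value anywhere makes the whole column numeric
    paid BOOLEAN     -- literal true/false in the JSON
);

-- One INSERT per object; single quotes escaped, booleans mapped to the dialect.
INSERT INTO orders (id, email, total, paid) VALUES (1, 'a@example.com', 19.99, TRUE);
INSERT INTO orders (id, email, total, paid) VALUES (2, 'b@example.com', 5, FALSE);
```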
Type inference rules you should know
Type inference is heuristic, not exact. The converter has no schema to work from, so it guesses based on the values it sees. Here are the rules that hold across most converters, with their failure modes:
- Integers. If every value for a key is an integer with no decimal point, the column becomes INTEGER (or BIGINT if any value exceeds 2,147,483,647). Be aware that IDs serialized as strings ("id": "1234") will be classified as TEXT, not integer — JSON does not distinguish numeric strings from numbers.
- Decimals. One decimal value anywhere in the column triggers NUMERIC, DECIMAL, or REAL. The converter cannot infer precision and scale from the data, so it usually picks the dialect default. If you need DECIMAL(10,2) for money, plan to edit the DDL by hand.
- Booleans. Literal true/false become BOOLEAN. "true"/"false" as strings will not be coerced — they become TEXT.
- Strings. The catch-all. Becomes TEXT in Postgres and SQLite, VARCHAR(255) in MySQL, or VARCHAR(MAX) in SQL Server. The converter will usually not measure the longest string and tune the size — pick a dialect that supports unbounded TEXT if you are not sure.
- Dates. ISO 8601 strings ("2026-05-11T07:00:00Z") get inferred as TIMESTAMP by the better converters. Plain dates ("2026-05-11") become DATE. Custom formats like "11/05/2026" stay as TEXT.
- NULL. A null JSON value is fine; the inference ignores it and decides the type based on the non-null values. A column with every value null defaults to TEXT.
Always skim the generated CREATE TABLE before running it. A 30-second review catches the cases where the converter guessed wrong — usually IDs-as-strings or numeric columns that need precision tuning.
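As a sketch of what that review usually changes, the block below assumes a hypothetical sales export where user_id arrived as a JSON string and price needs money semantics. The inferred version is shown in the comment; the statement is the hand-edited result:

```sql
-- A converter would plausibly infer:
--   user_id TEXT,     (the IDs were quoted in the JSON)
--   price   NUMERIC   (no precision or scale available to infer)
-- Edited after review:
CREATE TABLE sales (
    user_id VARCHAR(64) PRIMARY KEY,  -- bound the ID width and make it the key
    price   NUMERIC(10, 2)            -- pin precision and scale for money
);
```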
Handling nested objects and arrays
JSON nests. SQL does not. Every converter has to decide what to do with a key whose value is an object or an array. Three strategies are common:
- Store as JSON/JSONB. The default in modern converters for Postgres and MySQL. The column type is JSONB (Postgres) or JSON (MySQL 5.7+), and you query it later with -> / ->> in Postgres or JSON_EXTRACT in MySQL. The lowest-friction option, and usually the right choice for prototyping.
- Stringify. The fallback for SQLite and SQL Server. The nested value is serialized back to a JSON string and stored in a TEXT column. Queryable in SQLite with the json_extract() function if the JSON1 extension is enabled (it is by default in most builds since 2015).
- Normalize. Split the nested array into a child table with a foreign key back to the parent. JSONLint, SQLBook, and Sonra all support this — the others do not. Useful when you actually need to JOIN on the nested data, not just retrieve it.
Pick strategy 1 for Postgres or MySQL. Pick strategy 2 for SQLite. Reach for strategy 3 only when you know the nested data is queried by itself often enough to justify the extra schema. Premature normalization is the most common mistake in this workflow — the JSONB column is almost always good enough.
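A minimal sketch of strategy 1 on Postgres, assuming a hypothetical orders table whose customer key holds a nested object; the names are made up, but the JSONB operators are standard:

```sql
CREATE TABLE orders (
    id INTEGER,
    customer JSONB   -- nested object stored as-is instead of being flattened
);

INSERT INTO orders (id, customer)
VALUES (1, '{"name": "Ada", "address": {"city": "London"}}');

-- ->> returns text, -> returns JSONB; chain them to reach nested fields.
SELECT customer ->> 'name'              AS customer_name,
       customer -> 'address' ->> 'city' AS city
FROM orders;
```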
Dialect differences: Postgres, MySQL, SQLite
The same JSON array produces meaningfully different DDL depending on the target dialect:
- PostgreSQL. Native JSONB, BOOLEAN, TIMESTAMP, TEXT with no length cap. Use SERIAL or GENERATED ALWAYS AS IDENTITY for auto-increment IDs. Most converters target Postgres best because the type system maps cleanly to JSON.
- MySQL/MariaDB. Has JSON since 5.7 (no functional indexes in MariaDB until 10.6). BOOLEAN is an alias for TINYINT(1). TEXT is capped at 65KB; use LONGTEXT for larger strings. Auto-increment uses AUTO_INCREMENT.
- SQLite. Dynamic typing — INTEGER, REAL, TEXT, BLOB, NUMERIC. Booleans collapse to INTEGER 0/1. Auto-increment uses INTEGER PRIMARY KEY AUTOINCREMENT. The simplest target.
- SQL Server. NVARCHAR for Unicode strings, BIT for boolean, DATETIME2 for timestamps, NVARCHAR(MAX) for JSON. No native JSONB — query with JSON_VALUE and OPENJSON instead.
The dialect picker in the converter is the single most consequential setting. Switch it and re-generate; do not hand-edit the output.
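To show how much that setting changes, here is the same hypothetical events table rendered for two of the dialects above; the column names are illustrative, and a given converter may pick slightly different types:

```sql
-- PostgreSQL output: native boolean, timestamp, JSONB, identity column.
CREATE TABLE events (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    happened_at TIMESTAMP,
    is_test BOOLEAN,
    payload JSONB
);

-- SQLite output: the same columns collapse to SQLite's storage classes.
CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    happened_at TEXT,   -- ISO 8601 string; SQLite has no dedicated date type
    is_test INTEGER,    -- 0/1 instead of a boolean
    payload TEXT        -- nested JSON stringified
);
```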
Why browser-local matters for this workflow
JSON dumps frequently contain things you do not want on a third-party server. Order records with customer email addresses. API responses with auth tokens. User analytics with personally identifying fields. Any kind of staging-environment data that mirrors production.
Most online converters process the JSON server-side. The data hits their backend, gets parsed, the SQL gets generated, and the result is returned. Whether the input is logged, cached, or shared with third-party analytics depends on the tool and the privacy policy. For sensitive data, the safer default is a converter that runs entirely in your browser via JavaScript or WebAssembly — the JSON gets parsed in your tab and the SQL is generated locally, with nothing crossing the network.
You can verify this in DevTools: open the Network tab, paste the JSON, hit convert, and watch for any POST request carrying your input. A truly browser-local tool will show zero outbound requests during conversion. If a request fires, the data is going somewhere. This is the same check you can run on any of our JSON tools — including the JSON formatter and JSON to CSV converter — and you will see no network activity during the actual conversion step.
A reproducible JSON-to-SQL workflow
Five steps that consistently produce clean output:
- Validate the JSON. Run the source through a validator first. If the JSON is hand-edited or scraped from HTML, expect at least one trailing comma or unquoted key.
- Format and inspect. Pretty-print the JSON. Skim the first 5 objects and the last 5. Confirm the key set is consistent across rows. Inconsistent keys are fine — they just become nullable columns — but it is worth knowing before you generate DDL.
- Pick the dialect. Match the target database. Switching dialects later means regenerating the entire DDL, not editing it.
- Generate, review, edit. Look at the CREATE TABLE first. Tune type widths for money, retype ID-string columns to VARCHAR(64) if they should not be integers, and add a primary key constraint if the converter did not infer one.
- Run on a scratch database. Never run the generated SQL directly against production. Pipe it into a local Postgres or SQLite, run a few SELECT queries, and confirm row counts match the JSON array length before you trust it.
The five-step loop takes about as long as writing the DDL by hand for a 5-column table — and a fraction of the time for anything wider than that.
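For step 5, the smoke test can stay small. Assuming the generated file loaded a hypothetical orders table into a local Postgres or a recent SQLite, a couple of queries like these are enough before you trust the output:

```sql
-- Should equal the length of the JSON array you converted.
SELECT COUNT(*) AS row_count FROM orders;

-- A column that is entirely NULL usually means a key was inconsistent
-- across objects or got dropped during conversion.
SELECT COUNT(*) FILTER (WHERE email IS NULL) AS null_emails,
       COUNT(*) FILTER (WHERE total IS NULL) AS null_totals
FROM orders;
```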
Your data never leaves your browser
PDF Mavericks processes JSON in your tab using WebAssembly. No file is uploaded, no input is logged, no third party sees your data.
Frequently asked questions
What does a JSON to SQL converter actually generate?
Two things: a CREATE TABLE statement that maps each top-level JSON key to a typed column, and one INSERT row per object in your JSON array. Most converters let you pick the SQL dialect (PostgreSQL, MySQL, SQLite, SQL Server) so the output matches the database you are about to load it into. You paste an array of objects, you get a ready-to-run .sql file back.
How does the tool decide what column type to use?
It scans every value for each key across the full array. If every value parses as an integer, the column becomes INTEGER or BIGINT. If anything is a decimal, it becomes NUMERIC or REAL. Mixed strings and numbers fall back to TEXT or VARCHAR. Booleans become BOOLEAN (Postgres/MySQL) or INTEGER 0/1 (SQLite). ISO date strings can be inferred as DATE or TIMESTAMP when every value matches the pattern. The wider your sample, the more accurate the inference.
What happens to nested JSON objects or arrays?
Three common strategies. PostgreSQL: store nested objects as JSONB so you can query them with the -> and ->> operators later. MySQL 5.7+ has a JSON column type that works the same way. SQLite stores nested values as TEXT (most builds also support the JSON1 extension for querying). If you need fully normalized tables instead of embedded JSON, run JSONLint or Sonra's normalizer first — they split nested arrays into child tables with foreign keys.
Is it safe to paste production data into an online JSON to SQL tool?
Depends on the tool. JSONLint, AI2SQL, and most cloud tools send your data to their server for processing — that means the JSON is logged, cached, or stored at least temporarily. For sensitive data (PII, credentials, business records), use a tool that explicitly processes the JSON in your browser via WebAssembly or JavaScript, where the data never leaves your device. Beekeeper Studio's converter is one example of client-side processing. Always check the privacy policy.
Which SQL dialects are commonly supported?
PostgreSQL, MySQL/MariaDB, SQLite, and SQL Server cover roughly 90% of converters. PostgreSQL gets the best treatment because JSONB makes nested data trivial. Oracle support is rarer. BigQuery and Snowflake support exists in a few tools (SQLBook, Sonra) but you should verify the generated DDL against their type quirks — INTEGER means INT64 in BigQuery and NUMBER(38,0) in Snowflake.
How do I handle a 50MB JSON file?
Browser-local converters typically handle 50-100MB before the UI gets sluggish — the limit is how much JSON your browser can parse into memory. For larger files, stream-parse with a CLI: jq for filtering, sqlizer-cli or sonra for bulk conversion. Or split the JSON into chunks of 10K records, convert each separately, then concatenate the INSERT statements. The CREATE TABLE only needs to run once.
Do I need to clean the JSON first?
Run it through a formatter and a validator first if the source is hand-edited or scraped — trailing commas, unquoted keys, or single quotes will break the converter. Strict JSON is the input contract. If the JSON came from an API response, it is almost always already strict and you can skip the cleanup step.