The Ultimate Guide: Convert CSV to SQL and SQL to CSV with Python & CLI Tools (2026)

1. Convert CSV to SQL in Python: 3 Easy Methods

Converting CSV files to SQL in Python is a routine yet critical task for data engineers, analysts, and backend developers. Python offers multiple ways to achieve this efficiently, catering to different use cases: a quick solution, a production-ready pipeline, or a portable SQL script.

Method 1: Using pandas + SQLAlchemy (Recommended)

Python’s pandas library, combined with SQLAlchemy, provides a clean, production-ready approach to converting CSV files to SQL tables. Pandas handles reading CSVs, inferring data types, and processing large files in chunks, while SQLAlchemy provides connectivity to almost any database (SQLite, MySQL, PostgreSQL).

Install dependencies:

pip install pandas sqlalchemy pymysql psycopg2-binary

Example code:

import pandas as pd
from sqlalchemy import create_engine

# Read CSV
df = pd.read_csv('sales.csv')

# Connect to SQLite
engine = create_engine('sqlite:///sales.db')

# Convert CSV to SQL
df.to_sql('sales_table', engine, if_exists='replace', index=False)

print(f"Loaded {len(df)} rows into sales_table")

Key benefits:

  • Automatic type inference
  • Handles nulls, dates, and numeric conversions
  • Chunked processing for large CSVs
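
Type inference can also be overridden explicitly via to_sql's dtype parameter. The sketch below uses an inline DataFrame and an in-memory SQLite database in place of sales.csv and sales.db; the column names are illustrative, not from the original article.

```python
import pandas as pd
from sqlalchemy import Float, Integer, String, create_engine, text

# Stand-in for pd.read_csv('sales.csv'); columns are hypothetical.
df = pd.DataFrame({
    "product": ["widget", "gadget"],
    "qty": [3, 7],
    "price": [1.50, 2.25],
})

engine = create_engine("sqlite:///:memory:")

# dtype pins each column to an explicit SQL type instead of relying
# on pandas' automatic inference.
df.to_sql(
    "sales_table",
    engine,
    if_exists="replace",
    index=False,
    dtype={"product": String(50), "qty": Integer(), "price": Float()},
)

with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM sales_table")).scalar()
print(count)  # 2
```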

Method 2: Using Python’s csv module + SQLite3

For projects where you want zero external dependencies, Python’s built-in csv and sqlite3 modules work perfectly.

Example:

import csv
import sqlite3

conn = sqlite3.connect('data.db')
cursor = conn.cursor()

with open('data.csv', 'r') as f:
    reader = csv.DictReader(f)
    columns = reader.fieldnames
    cursor.execute(f"CREATE TABLE IF NOT EXISTS data_table ({', '.join(col + ' TEXT' for col in columns)})")
    placeholders = ', '.join('?' for _ in columns)
    for row in reader:
        values = [row[col] for col in columns]
        cursor.execute(f"INSERT INTO data_table VALUES ({placeholders})", values)

conn.commit()
conn.close()

Use case: Quick scripts, small datasets, SQLite-only projects.
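
For larger files, a single executemany call is usually much faster than one execute per row. This is a sketch of that variant, using an inline sample in place of data.csv so it is self-contained:

```python
import csv
import io
import sqlite3

# Inline sample standing in for open('data.csv'); layout is hypothetical.
sample = io.StringIO("id,name\n1,Alice\n2,Bob\n")

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()

reader = csv.DictReader(sample)
columns = reader.fieldnames
col_defs = ", ".join(f"{col} TEXT" for col in columns)
cursor.execute(f"CREATE TABLE IF NOT EXISTS data_table ({col_defs})")

placeholders = ", ".join("?" for _ in columns)
# executemany accepts a generator, so rows stream without a full list in memory
rows = ([row[col] for col in columns] for row in reader)
cursor.executemany(f"INSERT INTO data_table VALUES ({placeholders})", rows)

conn.commit()
count = cursor.execute("SELECT COUNT(*) FROM data_table").fetchone()[0]
print(count)  # 2
conn.close()
```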

Method 3: Generating a SQL File from CSV

Sometimes, you don’t want to connect directly to a database. You can generate a .sql file containing CREATE TABLE and INSERT statements, which can later be run on any database.

Example:

import csv

with open('data.csv', 'r') as f:
    reader = csv.DictReader(f)
    columns = reader.fieldnames
    rows = list(reader)

with open('output.sql', 'w') as sql:
    sql.write(f"CREATE TABLE IF NOT EXISTS data_table ({', '.join(col + ' TEXT' for col in columns)});\n")
    for row in rows:
        # Double single quotes to escape them in SQL string literals
        vals = ', '.join("'" + v.replace("'", "''") + "'" for v in row.values())
        sql.write(f"INSERT INTO data_table VALUES ({vals});\n")

Use case: Review SQL before running, share scripts, version control, or production deployment.

Summary: The pandas + SQLAlchemy approach is best for most projects, csv + sqlite3 is zero-dependency, and generating SQL scripts provides portability and safety.

2. Python: Convert CSV File to SQL Database

Python makes it straightforward to take a CSV file and convert it into a full SQL database. This approach is common for ETL pipelines, analytics workflows, or migrating legacy data.

Step 1: Install Required Libraries

pip install pandas sqlalchemy pymysql psycopg2-binary

Step 2: Create a Database Connection

Python can connect to multiple database types:

from sqlalchemy import create_engine

# SQLite example
engine = create_engine('sqlite:///customers.db')

# MySQL example
engine = create_engine('mysql+pymysql://user:password@localhost/customers_db')

# PostgreSQL example
engine = create_engine('postgresql://user:password@localhost:5432/customers_db')

Step 3: Read CSV and Convert

import pandas as pd

df = pd.read_csv('customers.csv')
df.to_sql('customers', engine, if_exists='replace', index=False)
print(f"{len(df)} rows inserted into 'customers'")

Step 4: Handling Large Files

For very large CSVs:

for chunk in pd.read_csv('large_customers.csv', chunksize=10000):
    chunk.to_sql('customers', engine, if_exists='append', index=False)

Step 5: Data Validation

Always verify:

from sqlalchemy import text

with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM customers")).scalar()
    print(f"Row count: {count}")

Benefits:

  • Works with any relational database
  • Scales to large datasets
  • Allows type handling, chunking, and automation
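
A common validation step is asserting that the row count in the table matches the DataFrame you loaded. A minimal sketch, using an inline DataFrame and an in-memory SQLite connection in place of customers.csv and customers.db (to_sql also accepts a plain sqlite3 connection):

```python
import sqlite3

import pandas as pd

# Stand-in for pd.read_csv('customers.csv'); the data is hypothetical.
df = pd.DataFrame({"id": [1, 2, 3], "name": ["Ann", "Ben", "Cy"]})

conn = sqlite3.connect(":memory:")
df.to_sql("customers", conn, if_exists="replace", index=False)

# Compare the source row count against what actually landed in the table
db_count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
assert db_count == len(df), f"expected {len(df)} rows, found {db_count}"
print("validation passed:", db_count, "rows")
```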

3. Convert CSV to SQL in Python Using pandas

Using pandas is arguably the most beginner-friendly and scalable method for converting CSV to SQL. Pandas integrates seamlessly with SQLAlchemy, making data conversion a matter of a few lines of code.

Step 1: Read CSV

import pandas as pd

df = pd.read_csv('products.csv')
print(df.head())

Step 2: Set Explicit Data Types

df['price'] = pd.to_numeric(df['price'], errors='coerce')
df['stock'] = df['stock'].astype(int)
df['created_at'] = pd.to_datetime(df['created_at'])

Step 3: Connect to Database

from sqlalchemy import create_engine

engine = create_engine('sqlite:///shop.db')

Step 4: Convert CSV to SQL Table

df.to_sql('products', engine, if_exists='replace', index=False)

Step 5: Chunking for Large Files

for i, chunk in enumerate(pd.read_csv('big_products.csv', chunksize=10000)):
    chunk.to_sql('products', engine, if_exists='append' if i > 0 else 'replace', index=False)
    print(f"Chunk {i} loaded")

Advantages:

  • Fast and clean
  • Handles large datasets
  • Works with SQLite, MySQL, PostgreSQL

4. Convert CSV to SQL File Using Python

Sometimes you want to convert a CSV to a SQL file using Python, producing a portable script rather than inserting directly into a database. This approach is especially useful for:

  • Sharing SQL scripts with developers or DBAs
  • Version controlling database changes
  • Preparing batch imports on remote servers

Step 1: Read CSV

import csv

with open('data.csv', 'r') as f:
    reader = csv.DictReader(f)
    columns = reader.fieldnames
    rows = list(reader)

Step 2: Write CREATE TABLE Statement

col_defs = ', '.join(col + ' TEXT' for col in columns)

with open('output.sql', 'w') as sql:
    sql.write(f"CREATE TABLE IF NOT EXISTS data_table ({col_defs});\n")

Step 3: Write INSERT Statements

# Reopen in append mode: the 'with' block in Step 2 closed the file
with open('output.sql', 'a') as sql:
    for row in rows:
        # Double single quotes to escape them in SQL string literals
        vals = ', '.join("'" + v.replace("'", "''") + "'" for v in row.values())
        sql.write(f"INSERT INTO data_table VALUES ({vals});\n")

Step 4: Test Output

sqlite3 test.db < output.sql
sqlite3 test.db "SELECT COUNT(*) FROM data_table;"

Advantages:

  • Fully portable
  • No database connection required
  • Can be reviewed before execution
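
One refinement worth considering: wrapping the INSERT statements in a transaction typically makes the import much faster, since the database commits once instead of per statement. A sketch of the same generation logic with BEGIN/COMMIT added, using inline sample data in place of data.csv and verifying the script against an in-memory SQLite database:

```python
import csv
import io
import sqlite3

# Inline sample standing in for open('data.csv'); layout is hypothetical.
sample = io.StringIO("id,name\n1,O'Brien\n2,Bob\n")
reader = csv.DictReader(sample)
columns = reader.fieldnames
rows = list(reader)

lines = []
col_defs = ", ".join(col + " TEXT" for col in columns)
lines.append(f"CREATE TABLE IF NOT EXISTS data_table ({col_defs});")
lines.append("BEGIN TRANSACTION;")  # one commit for the whole batch
for row in rows:
    vals = ", ".join("'" + v.replace("'", "''") + "'" for v in row.values())
    lines.append(f"INSERT INTO data_table VALUES ({vals});")
lines.append("COMMIT;")
script = "\n".join(lines)

# Verify the generated script runs cleanly end to end
conn = sqlite3.connect(":memory:")
conn.executescript(script)
count = conn.execute("SELECT COUNT(*) FROM data_table").fetchone()[0]
print(count)  # 2
```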

5. Python: Convert SQL to CSV

Reversing the process — exporting SQL data to CSV — is essential for reporting, data analysis, or ETL pipelines.

Step 1: Connect to Database

from sqlalchemy import create_engine

engine = create_engine('sqlite:///mydb.db')

Step 2: Query and Export Using pandas

import pandas as pd

df = pd.read_sql('SELECT * FROM sales', engine)
df.to_csv('sales.csv', index=False)

Step 3: Using csv.writer (No pandas Required)

import csv
from sqlalchemy import text

with engine.connect() as conn:
    result = conn.execute(text("SELECT * FROM sales"))
    with open('sales.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(result.keys())
        writer.writerows(result.fetchall())

Step 4: Handling Large Queries

import os

for chunk in pd.read_sql('SELECT * FROM huge_table', engine, chunksize=10000):
    chunk.to_csv('huge_table.csv', mode='a', header=not os.path.exists('huge_table.csv'), index=False)

Benefits:

  • Easy integration with Python pipelines
  • Handles large datasets efficiently
  • Provides both pandas and standard Python options
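
The batching idea also works without pandas: cursor.fetchmany streams results in fixed-size batches so the whole result set never sits in memory. A stdlib-only sketch with an in-memory table and an io.StringIO standing in for the output file (table and column names are illustrative):

```python
import csv
import io
import sqlite3

# Build a small sample table in place of a real database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(5)])

cursor = conn.execute("SELECT * FROM sales")
out = io.StringIO()  # stand-in for open('sales.csv', 'w', newline='')
writer = csv.writer(out)
# Column names come from the cursor's description
writer.writerow([d[0] for d in cursor.description])

while True:
    batch = cursor.fetchmany(2)  # stream in batches instead of fetchall()
    if not batch:
        break
    writer.writerows(batch)

n_lines = len(out.getvalue().splitlines())
print(n_lines)  # 6: header plus 5 rows
```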

6. Convert CSV to SQL from the Command Line

For developers and sysadmins, CLI tools provide a scriptable, fast way to convert CSV to SQL.

Tool 1: csvkit (csvsql)

pip install csvkit

csvsql --db sqlite:///output.db --insert data.csv

csvsql data.csv  # print the generated SQL

Tool 2: SQLite3 Built-In Import

sqlite3 mydb.db << EOF
.mode csv
.import data.csv my_table
EOF

Tool 3: MySQL LOAD DATA INFILE

mysql -u root -p mydb << EOF
LOAD DATA LOCAL INFILE 'data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
EOF

Automation Script Example

#!/bin/bash
DB="sqlite:///output.db"

for file in data/*.csv; do
    tablename=$(basename "$file" .csv)
    csvsql --db "$DB" --insert --tables "$tablename" "$file"
done

Conclusion: CLI tools are ideal for automation, remote servers, or pipelines where writing a Python script is not an option.

