Include headers when using SELECT INTO OUTFILE? - mysql
Is it possible to include the headers somehow when using the MySQL INTO OUTFILE?
You'd have to hard code those headers yourself. Something like:
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE '/path/outfile'
The solution provided by Joe Steanelli works, but building the column list by hand is inconvenient when dozens or hundreds of columns are involved. Here's how to get the column list of table my_table in my_schema:
-- override GROUP_CONCAT limit of 1024 characters to avoid a truncated result
set session group_concat_max_len = 1000000;
select GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'"))
from INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_table'
AND TABLE_SCHEMA = 'my_schema'
ORDER BY ORDINAL_POSITION;
Now you can copy &amp; paste the resulting row as the first statement in Joe's method.
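If you'd rather skip the copy-and-paste step entirely, the same idea can be scripted. Here's a minimal sketch (the helper name and paths are illustrative, not from any answer here) that builds the header-plus-data query from a list of column names:

```python
# Hypothetical helper: given the column names, build the quoted header
# row and the data SELECT, mirroring the UNION ALL pattern above.
def build_outfile_query(columns, table, outfile):
    header = ", ".join("'{}'".format(c) for c in columns)
    cols = ", ".join(columns)
    return ("SELECT {0} UNION ALL SELECT {1} FROM {2} "
            "INTO OUTFILE '{3}'").format(header, cols, table, outfile)

print(build_outfile_query(["id", "name"], "my_table", "/tmp/out.csv"))
# → SELECT 'id', 'name' UNION ALL SELECT id, name FROM my_table INTO OUTFILE '/tmp/out.csv'
```

You'd feed it the column list fetched from INFORMATION_SCHEMA.COLUMNS and send the resulting string to the server.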
For a complex SELECT with ORDER BY I use the following:
SELECT * FROM (
SELECT 'Column name #1', 'Column name #2', 'Column name ##'
UNION ALL
(
-- complex SELECT statement with WHERE, ORDER BY, GROUP BY, etc.
)
) resulting_set
INTO OUTFILE '/path/to/file';
This will allow you to have ordered columns and/or a LIMIT:
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT * from (SELECT ColName1, ColName2, ColName3
FROM YourTable order by ColName1 limit 3) a
INTO OUTFILE '/path/outfile';
You can use a prepared statement with lucek's answer to dynamically export a table, with its column names, to CSV:
-- If your table has many columns
SET GLOBAL group_concat_max_len = 100000000;
-- Prepared statement
SET @SQL = ( select CONCAT('SELECT * INTO OUTFILE \'YOUR_PATH\' FIELDS TERMINATED BY \',\' OPTIONALLY ENCLOSED BY \'"\' ESCAPED BY \'\' LINES TERMINATED BY \'\\n\' FROM (SELECT ', GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'")),' UNION select * from YOUR_TABLE) as tmp') from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'YOUR_TABLE' AND TABLE_SCHEMA = 'YOUR_SCHEMA' order BY ORDINAL_POSITION );
-- Execute it
PREPARE stmt FROM @SQL;
EXECUTE stmt;
Thanks, lucek.
I simply run two queries: the first gets the query output (with LIMIT 1) including the column names (no hardcoding, no problems with joins, ORDER BY, custom column names, etc.), and the second runs the query itself. Then I combine the files into one CSV file:
CSVHEAD=`/usr/bin/mysql $CONNECTION_STRING -e "$QUERY limit 1;"|head -n1|xargs|sed -e "s/ /'\;'/g"`
echo "\'$CSVHEAD\'" > $TMP/head.txt
/usr/bin/mysql $CONNECTION_STRING -e "$QUERY into outfile '${TMP}/data.txt' fields terminated by ';' optionally enclosed by '\"' escaped by '' lines terminated by '\r\n';"
cat $TMP/head.txt $TMP/data.txt > $TMP/data.csv
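The sed step above does one small string transformation: it turns the tab-separated header row printed by the mysql client into a quoted, semicolon-separated line. A sketch of just that transformation in Python (the function name is illustrative):

```python
# Mimics head -n1 | xargs | sed: collapse the tab-separated header row
# returned by the mysql client into a quoted, ';'-separated line.
def quote_header(tab_header):
    cols = tab_header.strip().split("\t")
    return "'" + "';'".join(cols) + "'"

print(quote_header("id\ttime\tunit"))  # → 'id';'time';'unit'
```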
This is an alternative cheat if you are familiar with Python or R, and your table can fit into memory.
Import the SQL table into Python or R and then export from there as a CSV and you'll get the column names as well as the data.
Here's how I do it using R, requires the RMySQL library:
db <- dbConnect(MySQL(), user='user', password='password', dbname='myschema', host='localhost')
query <- dbSendQuery(db, "select * from mytable")
dataset <- fetch(query, n=-1)
write.csv(dataset, 'mytable_backup.csv')
It's a bit of a cheat, but I found this was a quick workaround when my number of columns was too large for the concat method above. Note: R will add a 'row.names' column at the start of the CSV, so you'll want to drop that if you need to rely on the CSV to recreate the table.
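The same client-side trick works in Python with the standard csv module once the rows are fetched, and unlike R's write.csv it adds no row-names column. A minimal sketch; the rows here are simulated rather than fetched from a real connection:

```python
import csv

# Simulated result set; in practice these would come from a DB cursor
# (e.g. cursor.fetchall() plus cursor.description for the names).
columns = ["id", "name"]
rows = [(1, "amy"), (2, "bob")]

with open("mytable_backup.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)   # header row first
    writer.writerows(rows)     # then the data rows
```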
I faced a similar problem while executing MySQL queries on large tables in NodeJS. The approach I followed to include headers in my CSV file is as follows:
Use an OUTFILE query to prepare the file without headers
SELECT * INTO OUTFILE [FILE_NAME] FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED
BY '"' LINES TERMINATED BY '\n' FROM [TABLE_NAME]
Fetch the column headers for the table used in step 1
select GROUP_CONCAT(COLUMN_NAME) as col_names from
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = [TABLE_NAME] AND TABLE_SCHEMA
= [DATABASE_NAME] ORDER BY ORDINAL_POSITION
Append the column headers to the file created in step 1 using prepend-file npm package
Execution of each step was controlled using promises in NodeJS.
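Step 3 (prepending the header line) doesn't strictly need a package; for files that fit in memory it is a read-then-rewrite. A sketch of the same idea in Python, with an illustrative file name:

```python
def prepend_header(path, header_line):
    # Read the headerless export, then rewrite it with the header first.
    with open(path, "r", encoding="utf-8") as f:
        body = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write(header_line + "\n" + body)

# Demo: a headerless "export", then the header prepended.
with open("data_demo.csv", "w", encoding="utf-8") as f:
    f.write("1,foo\n")
prepend_header("data_demo.csv", "id,name")
```

For exports too large for memory, you'd instead write the header to a new file and stream-append the data file to it (which is essentially what the shell `cat` approach above does).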
I think if you use a UNION it will work:
select 'header 1', 'header 2', ...
union
select col1, col2, ... from ...
I don't know of a way to specify the headers with the INTO OUTFILE syntax directly.
Since the 'include headers' functionality doesn't seem to be built in yet, and most "solutions" here require typing the column names manually and/or don't take joins into account, I'd recommend working around the problem.
The best alternative I found so far is using a decent tool (I use HeidiSQL).
Enter your query, select the grid, right-click, and export to a file. It has all the necessary options for a clean export and should handle most needs.
In the same vein, user3037511's approach works fine and can be automated easily.
Just launch your query from the command line to get your headers. You can get the data with a SELECT INTO OUTFILE... or by running your query without the limit; your choice.
Note that output redirection to a file works like a charm on both Linux AND Windows.
This makes me want to highlight that 80% of the time, when I want to use SELECT FROM INFILE or SELECT INTO OUTFILE, I end up using something else due to some limitation (here, the absence of a headers option; on AWS RDS, missing rights; and so on).
Hence, I don't exactly answer the OP's question... but it should answer their needs :)
EDIT: and to actually answer the question: no.
As of 2017-09-07, you just can't include headers if you stick with the SELECT INTO OUTFILE command :|
The easiest way is to hard code the columns yourself to better control the output file:
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE '/path/outfile'
Actually, you can make it work even with an ORDER BY.
It just needs some trickery in the ORDER BY clause: we use a CASE expression to replace the header value with some other value that is guaranteed to sort first in the list (obviously this depends on the field's type and whether you are sorting ASC or DESC).
Let's say you have three fields, name (varchar), is_active (bool), and date_something_happens (date), and you want to sort the last two descending:
select
'name'
, 'is_active' as is_active
, date_something_happens as 'date_something_happens'
union all
select name, is_active, date_something_happens
from
my_table
order by
(case is_active when 'is_active' then 0 else is_active end) desc
, (case date_something_happens when 'date_something_happens' then '9999-12-30' else date_something_happens end) desc
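The CASE trick is easier to see outside SQL: sort only the data rows, keeping the header row pinned first. A small pure-Python illustration of the same idea (values are made up):

```python
# Header row plus data rows; the CASE expressions in the SQL above are
# just a way of forcing the header to sort ahead of everything else.
rows = [
    ("name", "is_active", "date_something_happens"),  # header
    ("amy", 0, "2021-06-15"),
    ("bob", 1, "2020-01-01"),
]
header, data = rows[0], rows[1:]
# Sort only the data rows, descending on (is_active, date).
data.sort(key=lambda r: (r[1], r[2]), reverse=True)
result = [header] + data
```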
So, if all the columns in my_table are of a character data type, we can combine the top answers (by Joe, matt and evilguc) to get the header added automatically in one 'simple' SQL query, e.g.
select * from (
(select column_name
from information_schema.columns
where table_name = 'my_table'
and table_schema = 'my_schema'
order by ordinal_position)
union all
(select * -- potentially complex SELECT statement with WHERE, ORDER BY, GROUP BY, etc.
from my_table)) as tbl
into outfile '/path/outfile'
fields terminated by ',' optionally enclosed by '"' escaped by '\\'
lines terminated by '\n';
where the last couple of lines make the output a CSV.
Note that this may be slow if my_table is very large.
An example from my database:
Table sensor with columns (id, time, unit):
select ('id') as id, ('time') as time, ('unit') as unit
UNION ALL
SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sensor
If you are using MySQL Workbench:
Select all the columns from the SCHEMAS tab -> Right Click -> Copy to
Clipboard -> Name
Paste it in any text editor and, Replace " ` " with " ' "
Copy it back and use it in your UNION query (as mentioned in the accepted
answer):
SELECT [Paste your text here]
UNION ALL
SELECT *
FROM table_name
INTO OUTFILE 'file_path'
I was writing my code in PHP, and I had a bit of trouble using the CONCAT and UNION functions; I also did not use SQL variables. Anyway, I got it to work. Here is my code:
// first I connected to the information_schema DB
$headercon=mysqli_connect("localhost", "USERNAME", "PASSWORD", "information_schema");
// took the headers out into a string (I could not get the CONCAT function to work, so I wrote a loop for it)
$headers = '';
$sql = "SELECT column_name AS columns FROM `COLUMNS` WHERE table_schema = 'YOUR_DB_NAME' AND table_name = 'YOUR_TABLE_NAME'";
$result = $headercon->query($sql);
while($row = $result->fetch_row())
{
$headers = $headers . "'" . $row[0] . "', ";
}
$headers = substr("$headers", 0, -2);
// connect to the DB of interest
$con=mysqli_connect("localhost", "USERNAME", "PASSWORD", "YOUR_DB_NAME");
// export the results to csv
$sql4 = "SELECT $headers UNION SELECT * FROM YOUR_TABLE_NAME WHERE ... INTO OUTFILE '/output.csv' FIELDS TERMINATED BY ','";
$result4 = $con->query($sql4);
Here is a way to get the header titles from the column names dynamically.
/* Change table_name and database_name */
SET @table_name = 'table_name';
SET @table_schema = 'database_name';
SET @default_group_concat_max_len = (SELECT @@group_concat_max_len);
/* Sets group_concat_max_len larger for tables with a lot of columns */
SET SESSION group_concat_max_len = 1000000;
SET @col_names = (
SELECT GROUP_CONCAT(QUOTE(`column_name`)) AS columns
FROM information_schema.columns
WHERE table_schema = @table_schema
AND table_name = @table_name);
SET @cols = CONCAT('(SELECT ', @col_names, ')');
SET @query = CONCAT('(SELECT * FROM ', @table_schema, '.', @table_name,
' INTO OUTFILE \'/tmp/your_csv_file.csv\'
FIELDS ENCLOSED BY \'\\\'\' TERMINATED BY \'\t\' ESCAPED BY \'\'
LINES TERMINATED BY \'\n\')');
/* Concatenates column names to query */
SET @sql = CONCAT(@cols, ' UNION ALL ', @query);
/* Resets group_concat_max_len back to its original value */
SET SESSION group_concat_max_len = @default_group_concat_max_len;
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
I would like to add to the answer provided by Sangam Belose. Here's his code:
select ('id') as id, ('time') as time, ('unit') as unit
UNION ALL
SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sensor
However, if you have not set "secure_file_priv" among your variables, it may not work. In that case, check the folder set in that variable by running:
SHOW VARIABLES LIKE "secure_file_priv"
The output should look like this:
mysql> show variables like "%secure_file_priv%";
+------------------+------------------------------------------------+
| Variable_name | Value |
+------------------+------------------------------------------------+
| secure_file_priv | C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\ |
+------------------+------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
You can either change this variable or change the query to output the file to the default path showing.
MySQL alone isn't enough to do this simply. Below is a PHP script that will output columns and data to CSV.
Enter your database name and tables near the top.
<?php
set_time_limit( 24192000 );
ini_set( 'memory_limit', '-1' );
setlocale( LC_CTYPE, 'en_US.UTF-8' );
mb_regex_encoding( 'UTF-8' );
$dbn = 'DB_NAME';
$tbls = array(
'TABLE1',
'TABLE2',
'TABLE3'
);
$db = new PDO( 'mysql:host=localhost;dbname=' . $dbn . ';charset=UTF8', 'root', 'pass' );
foreach( $tbls as $tbl )
{
echo $tbl . "\n";
$path = '/var/lib/mysql/' . $tbl . '.csv';
$colStr = '';
$cols = $db->query( 'SELECT COLUMN_NAME AS `column` FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = "' . $tbl . '" AND TABLE_SCHEMA = "' . $dbn . '"' )->fetchAll( PDO::FETCH_COLUMN );
foreach( $cols as $col )
{
if( $colStr ) $colStr .= ', ';
$colStr .= '"' . $col . '"';
}
$db->query(
'SELECT *
FROM
(
SELECT ' . $colStr . '
UNION ALL
SELECT * FROM ' . $tbl . '
) AS sub
INTO OUTFILE "' . $path . '"
FIELDS TERMINATED BY ","
ENCLOSED BY "\""
LINES TERMINATED BY "\n"'
);
exec( 'gzip ' . $path );
print_r( $db->errorInfo() );
}
?>
You'll need this to be the directory you'd like to output to. MySQL needs to have the ability to write to the directory.
$path = '/var/lib/mysql/' . $tbl . '.csv';
You can edit the CSV export options in the query:
INTO OUTFILE "' . $path . '"
FIELDS TERMINATED BY ","
ENCLOSED BY "\""
LINES TERMINATED BY "\n"'
At the end there is an exec call to GZip the CSV.
I had no luck with any of these, so after finding a solution, I wanted to add it to the prior answers. Python 3.8.6, MySQL 8.0.19.
Note a couple of things:
First, the query to return column names is unforgiving of punctuation. Using backticks, or leaving out the single quotes around 'schema_name' and 'table_name', will throw an "unknown column" error.
WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table'
Second, the column header names come back as a single-element tuple with all the column names concatenated into one string. Converting it to a list was easy, but not intuitive (for me at least).
headers_list = headers_result[0].split(",")
Third, the cursor must be buffered, or the lazy cursor will not fetch your results as you need them. For very large tables, memory could be an issue; chunking would probably solve that.
cur = db.cursor(buffered=True)
Last, all my UNION attempts yielded errors. By zipping the whole mess into a list of dicts, it became trivial to write a CSV using csv.DictWriter.
headers_sql = """
SELECT
GROUP_CONCAT(CONCAT(COLUMN_NAME) ORDER BY ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table';
"""
cur = db.cursor(buffered=True)
cur.execute(headers_sql)
headers_result = cur.fetchone()
headers_list = headers_result[0].split(",")
rows_sql = """ SELECT * FROM schema.table; """
cur.execute(rows_sql)
data_rows = cur.fetchall()
data_as_list_of_dicts = [dict(zip(headers_list, row)) for row in data_rows]
with open(csv_destination_file, 'w', encoding='utf-8') as destination_file_opened:
    dict_writer = csv.DictWriter(destination_file_opened, fieldnames=headers_list)
    dict_writer.writeheader()
    for row in data_as_list_of_dicts:
        dict_writer.writerow(row)
A solution using Python, with no need to install a Python package to read SQL files if you already use another tool.
If you are not familiar with Python, you can run the code in a Colab notebook, where all the required packages are already installed. It automates Matt's and Joe's solutions.
First, execute this SQL script to get a CSV with all table names:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA='your_schema'
INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/tables.csv';
Then move tables.csv to a suitable directory and execute this Python code after replacing 'your_schema' (and the paths if needed). It will generate an SQL script that exports all table headers:
import pandas as pd
import os
tables = pd.read_csv('tables.csv',header = None)[0]
text_file = open("export_headers.sql", "w")
schema = 'your_schema'
sql_output_path = 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/'
for table in tables :
path = os.path.join(sql_output_path,'{}_header.csv'.format(table))
string = "(select GROUP_CONCAT(COLUMN_NAME)\nfrom INFORMATION_SCHEMA.COLUMNS\nWHERE TABLE_NAME = '{}'\nAND TABLE_SCHEMA = '{}'\norder BY ORDINAL_POSITION)\nINTO OUTFILE '{}';".format(table,schema,path)
n = text_file.write(string)
n = text_file.write('\n\n')
text_file.close()
Then execute this Python code, which will generate an SQL script to export the values of all tables:
text_file = open("export_values.sql", "w")
for table in tables :
path = os.path.join(sql_output_path,'{}.csv'.format(table))
string = "SELECT * FROM {}.{}\nINTO OUTFILE '{}';".format(schema,table,path)
n = text_file.write(string)
n = text_file.write('\n\n')
text_file.close()
Execute the two generated SQL scripts, then move the header CSVs and value CSVs into directories of your choice.
Then execute this last Python code:
#Respectively the path to the headers csvs, the values csv and the path where you want to put the csvs with headers and values combined
headers_path, values_path, tables_path = '', '', ''
for table in tables :
header = pd.read_csv(os.path.join(headers_path,'{}_header.csv'.format(table)))
df = pd.read_csv(os.path.join(values_path,'{}.csv'.format(table)),names = header.columns,sep = '\t')
df.to_csv(os.path.join(tables_path,'{}.csv'.format(table)),index = False)
Then you have all your tables exported to CSV with headers, without having to write or copy-paste all the table and column names.
Inspired by pivot table example from Rick James.
SET @CSVTABLE = 'myTableName',
@CSVBASE = 'databaseName',
@CSVFILE = '/tmp/filename.csv';
SET @sql = (SELECT CONCAT("SELECT ", GROUP_CONCAT(CONCAT('"', COLUMN_NAME, '"')), " UNION SELECT * FROM ", @CSVBASE, ".", @CSVTABLE) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME=@CSVTABLE AND TABLE_SCHEMA=@CSVBASE);
prepare stmt from CONCAT(@sql, " INTO OUTFILE '", @CSVFILE, "' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\\n';");
execute stmt;
It gets the list of columns from the INFORMATION_SCHEMA.COLUMNS table and uses GROUP_CONCAT to build a SELECT statement containing the column names as string literals.
Next a UNION is added with SELECT * FROM the specified database.table; this produces query text that outputs both the column names and the column values in the result.
Then the statement is prepared from the previously built query (stored in the @sql variable), the CSV-output-specific clauses are appended, and finally the statement is run with EXECUTE stmt.
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE 'c:\\datasheet.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n'
Related
How to export data to .psv file in mysql with table headers? [duplicate]
Is it possible to include the headers somehow when using the MySQL INTO OUTFILE?
You'd have to hard code those headers yourself. Something like: SELECT 'ColName1', 'ColName2', 'ColName3' UNION ALL SELECT ColName1, ColName2, ColName3 FROM YourTable INTO OUTFILE '/path/outfile'
The solution provided by Joe Steanelli works, but making a list of columns is inconvenient when dozens or hundreds of columns are involved. Here's how to get column list of table my_table in my_schema. -- override GROUP_CONCAT limit of 1024 characters to avoid a truncated result set session group_concat_max_len = 1000000; select GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'")) from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'my_table' AND TABLE_SCHEMA = 'my_schema' order BY ORDINAL_POSITION Now you can copy & paste the resulting row as first statement in Joe's method.
For complex select with ORDER BY I use the following: SELECT * FROM ( SELECT 'Column name #1', 'Column name #2', 'Column name ##' UNION ALL ( // complex SELECT statement with WHERE, ORDER BY, GROUP BY etc. ) ) resulting_set INTO OUTFILE '/path/to/file';
This will alow you to have ordered columns and/or a limit SELECT 'ColName1', 'ColName2', 'ColName3' UNION ALL SELECT * from (SELECT ColName1, ColName2, ColName3 FROM YourTable order by ColName1 limit 3) a INTO OUTFILE '/path/outfile';
You can use prepared statement with lucek's answer and export dynamically table with columns name in CSV : --If your table has too many columns SET GLOBAL group_concat_max_len = 100000000; --Prepared statement SET #SQL = ( select CONCAT('SELECT * INTO OUTFILE \'YOUR_PATH\' FIELDS TERMINATED BY \',\' OPTIONALLY ENCLOSED BY \'"\' ESCAPED BY \'\' LINES TERMINATED BY \'\\n\' FROM (SELECT ', GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'")),' UNION select * from YOUR_TABLE) as tmp') from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'YOUR_TABLE' AND TABLE_SCHEMA = 'YOUR_SCHEMA' order BY ORDINAL_POSITION ); --Execute it PREPARE stmt FROM #SQL; EXECUTE stmt; Thank lucek.
I simply make 2 queries, first to get query output (limit 1) with column names (no hardcode, no problems with Joins, Order by, custom column names, etc), and second to make query itself, and combine files into one CSV file: CSVHEAD=`/usr/bin/mysql $CONNECTION_STRING -e "$QUERY limit 1;"|head -n1|xargs|sed -e "s/ /'\;'/g"` echo "\'$CSVHEAD\'" > $TMP/head.txt /usr/bin/mysql $CONNECTION_STRING -e "$QUERY into outfile '${TMP}/data.txt' fields terminated by ';' optionally enclosed by '\"' escaped by '' lines terminated by '\r\n';" cat $TMP/head.txt $TMP/data.txt > $TMP/data.csv
This is an alternative cheat if you are familiar with Python or R, and your table can fit into memory. Import the SQL table into Python or R and then export from there as a CSV and you'll get the column names as well as the data. Here's how I do it using R, requires the RMySQL library: db <- dbConnect(MySQL(), user='user', password='password', dbname='myschema', host='localhost') query <- dbSendQuery(db, "select * from mytable") dataset <- fetch(query, n=-1) write.csv(dataset, 'mytable_backup.csv') It's a bit of a cheat but I found this was a quick workaround when my number of columns was too long to use the concat method above. Note: R will add a 'row.names' column at the start of the CSV so you'll want to drop that if you do need to rely on the CSV to recreate the table.
I faced similar problem while executing mysql query on large tables in NodeJS. The approach which I followed to include headers in my CSV file is as follows Use OUTFILE query to prepare file without headers SELECT * INTO OUTFILE [FILE_NAME] FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n' FROM [TABLE_NAME] Fetch column headers for the table used in point 1 select GROUP_CONCAT(CONCAT(\"\",COLUMN_NAME,\"\")) as col_names from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = [TABLE_NAME] AND TABLE_SCHEMA = [DATABASE_NAME] ORDER BY ORDINAL_POSITION Append the column headers to the file created in step 1 using prepend-file npm package Execution of each step was controlled using promises in NodeJS.
I think if you use a UNION it will work: select 'header 1', 'header 2', ... union select col1, col2, ... from ... I don't know of a way to specify the headers with the INTO OUTFILE syntax directly.
Since the 'include-headers' functionality doesn't seem to be build-in yet, and most "solutions" here need to type the columns names manually, and/or don't even take joins into account, I'd recommand to get around the problem. The best alternative I found so far is using a decent tool (I use HeidiSQL). Put your request, select the grid, just right click and export to a file. It got all necessary options for a clean export, ans should handle most needs. In the same idea, user3037511's approach works fine, and can be automated easily. Just launch your request with some command line to get your headers. You may get the data with a SELECT INTO OUTFILE... or by running your query without the limit, yours to choose. Note that output redirect to a file works like a charm on both Linux AND Windows. This makes me want to highlight that 80% of the time, when I want to use SELECT FROM INFILE or SELECT INTO OUTFILE, I end-up using something else due to some limitations (here, the absence of a 'headers options', on an AWS-RDS, the missing rights, and so on.) Hence, I don't exactly answer to the op's question... but it should answer his needs :) EDIT : and to actually answer his question : no As of 2017-09-07, you just can't include headers if you stick with the SELECT INTO OUTFILE command :|
The easiest way is to hard code the columns yourself to better control the output file: SELECT 'ColName1', 'ColName2', 'ColName3' UNION ALL SELECT ColName1, ColName2, ColName3 FROM YourTable INTO OUTFILE '/path/outfile'
Actually you can make it work even with an ORDER BY. Just needs some trickery in the order by statement - we use a case statement and replace the header value with some other value that is guaranteed to sort first in the list (obviously this is dependant on the type of field and whether you are sorting ASC or DESC) Let's say you have three fields, name (varchar), is_active (bool), date_something_happens (date), and you want to sort the second two descending: select 'name' , 'is_active' as is_active , date_something_happens as 'date_something_happens' union all select name, is_active, date_something_happens from my_table order by (case is_active when 'is_active' then 0 else is_active end) desc , (case date when 'date' then '9999-12-30' else date end) desc
So, if all the columns in my_table are a character data type, we can combine the top answers (by Joe, matt and evilguc) together, to get the header added automatically in one 'simple' SQL query, e.g. select * from ( (select column_name from information_schema.columns where table_name = 'my_table' and table_schema = 'my_schema' order by ordinal_position) union all (select * // potentially complex SELECT statement with WHERE, ORDER BY, GROUP BY etc. from my_table)) as tbl into outfile '/path/outfile' fields terminated by ',' optionally enclosed by '"' escaped by '\\' lines terminated by '\n'; where the last couple of lines make the output csv. Note that this may be slow if my_table is very large.
an example from my database table name sensor with colums (id,time,unit) select ('id') as id, ('time') as time, ('unit') as unit UNION ALL SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' FROM sensor
If you are using MySQL Workbench: Select all the columns from the SCHEMAS tab -> Right Click -> Copy to Clipboard -> Name Paste it in any text editor and, Replace " ` " with " ' " Copy it back and use it in your UNION query (as mentioned in the accepted answer): SELECT [Paste your text here] UNION ALL SELECT * FROM table_name INTO OUTFILE 'file_path'
I was writing my code in PHP, and I had a bit of trouble using concat and union functions, and also did not use SQL variables, any ways I got it to work, here is my code: //first I connected to the information_scheme DB $headercon=mysqli_connect("localhost", "USERNAME", "PASSWORD", "information_schema"); //took the healders out in a string (I could not get the concat function to work, so I wrote a loop for it) $headers = ''; $sql = "SELECT column_name AS columns FROM `COLUMNS` WHERE table_schema = 'YOUR_DB_NAME' AND table_name = 'YOUR_TABLE_NAME'"; $result = $headercon->query($sql); while($row = $result->fetch_row()) { $headers = $headers . "'" . $row[0] . "', "; } $headers = substr("$headers", 0, -2); // connect to the DB of interest $con=mysqli_connect("localhost", "USERNAME", "PASSWORD", "YOUR_DB_NAME"); // export the results to csv $sql4 = "SELECT $headers UNION SELECT * FROM YOUR_TABLE_NAME WHERE ... INTO OUTFILE '/output.csv' FIELDS TERMINATED BY ','"; $result4 = $con->query($sql4);
Here is a way to get the header titles from the column names dynamically. /* Change table_name and database_name */ SET #table_name = 'table_name'; SET #table_schema = 'database_name'; SET #default_group_concat_max_len = (SELECT ##group_concat_max_len); /* Sets Group Concat Max Limit larger for tables with a lot of columns */ SET SESSION group_concat_max_len = 1000000; SET #col_names = ( SELECT GROUP_CONCAT(QUOTE(`column_name`)) AS columns FROM information_schema.columns WHERE table_schema = #table_schema AND table_name = #table_name); SET #cols = CONCAT('(SELECT ', #col_names, ')'); SET #query = CONCAT('(SELECT * FROM ', #table_schema, '.', #table_name, ' INTO OUTFILE \'/tmp/your_csv_file.csv\' FIELDS ENCLOSED BY \'\\\'\' TERMINATED BY \'\t\' ESCAPED BY \'\' LINES TERMINATED BY \'\n\')'); /* Concatenates column names to query */ SET #sql = CONCAT(#cols, ' UNION ALL ', #query); /* Resets Group Contact Max Limit back to original value */ SET SESSION group_concat_max_len = #default_group_concat_max_len; PREPARE stmt FROM #sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
I would like to add to the answer provided by Sangam Belose. Here's his code: select ('id') as id, ('time') as time, ('unit') as unit UNION ALL SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' FROM sensor However, if you have not set up your "secure_file_priv" within the variables, it may not work. For that, check the folder set on that variable by: SHOW VARIABLES LIKE "secure_file_priv" The output should look like this: mysql> show variables like "%secure_file_priv%"; +------------------+------------------------------------------------+ | Variable_name | Value | +------------------+------------------------------------------------+ | secure_file_priv | C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\ | +------------------+------------------------------------------------+ 1 row in set, 1 warning (0.00 sec) You can either change this variable or change the query to output the file to the default path showing.
MySQL alone isn't enough to do this simply. Below is a PHP script that will output columns and data to CSV. Enter your database name and tables near the top. <?php set_time_limit( 24192000 ); ini_set( 'memory_limit', '-1' ); setlocale( LC_CTYPE, 'en_US.UTF-8' ); mb_regex_encoding( 'UTF-8' ); $dbn = 'DB_NAME'; $tbls = array( 'TABLE1', 'TABLE2', 'TABLE3' ); $db = new PDO( 'mysql:host=localhost;dbname=' . $dbn . ';charset=UTF8', 'root', 'pass' ); foreach( $tbls as $tbl ) { echo $tbl . "\n"; $path = '/var/lib/mysql/' . $tbl . '.csv'; $colStr = ''; $cols = $db->query( 'SELECT COLUMN_NAME AS `column` FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = "' . $tbl . '" AND TABLE_SCHEMA = "' . $dbn . '"' )->fetchAll( PDO::FETCH_COLUMN ); foreach( $cols as $col ) { if( $colStr ) $colStr .= ', '; $colStr .= '"' . $col . '"'; } $db->query( 'SELECT * FROM ( SELECT ' . $colStr . ' UNION ALL SELECT * FROM ' . $tbl . ' ) AS sub INTO OUTFILE "' . $path . '" FIELDS TERMINATED BY "," ENCLOSED BY "\"" LINES TERMINATED BY "\n"' ); exec( 'gzip ' . $path ); print_r( $db->errorInfo() ); } ?> You'll need this to be the directory you'd like to output to. MySQL needs to have the ability to write to the directory. $path = '/var/lib/mysql/' . $tbl . '.csv'; You can edit the CSV export options in the query: INTO OUTFILE "' . $path . '" FIELDS TERMINATED BY "," ENCLOSED BY "\"" LINES TERMINATED BY "\n"' At the end there is an exec call to GZip the CSV.
I had no luck with any of these, so after finding a solution, I wanted to add it to the prior answers. Python==3.8.6 MySQL==8.0.19 (Forgive my lack of SO formatting foo. Somebody please clean up.) Note a couple of things: First, the query to return column names is unforgiving of punctuation. Using ` backticks or leaving out ' quote around the 'schema_name' and 'table_name' will throw an "unknown column" error. WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table' Second, the column header names return as a single-entity tuple with all the column names concatenated in one quoted string. Convert to quoted list was easy, but not intuitive (for me at least). headers_list = headers_result[0].split(",") Third, cursor must be buffered or the "lazy" thing will not fetch your results as you need them. For very large tables, memory could be an issue. Perhaps chunking would solve that problem. cur = db.cursor(buffered=True) Last, all types of UNION attempts yielded errors for me. By zipping the whole mess into a list of dicts, it became trivial to write to a csv, using csv.DictWriter. headers_sql = """ SELECT GROUP_CONCAT(CONCAT(COLUMN_NAME) ORDER BY ORDINAL_POSITION) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table'; """" cur = db.cursor(buffered=True) cur.execute(header_sql) headers_result = cur.fetchone() headers_list = headers_result[0].split(",") rows_sql = """ SELECT * FROM schema.table; """" data = cur.execute(rows_sql) data_rows = cur.fetchall() data_as_list_of_dicts = [dict(zip(headers_list, row)) for row in data_rows] with open(csv_destination_file, 'w', encoding='utf-8') as destination_file_opened: dict_writer = csv.DictWriter(destination_file_opened, fieldnames=headers_list) dict_writer.writeheader() for dict in dict_list: dict_writer.writerow(dict)
Solution using Python, but no need to install a Python package to read SQL files if you already use another tool. If you are not familiar with Python, you can run the Python code in a Colab notebook; all the required packages are already installed. It automates Matt and Joe's solutions.

Firstly, execute this SQL script to get a CSV with all table names:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = 'your_schema'
INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/tables.csv';

Then move tables.csv to a suitable directory and execute this Python code after having replaced 'path_to_tables' and 'your_schema'. It will generate a SQL script to export all table headers:

import pandas as pd
import os

tables = pd.read_csv('tables.csv', header=None)[0]
text_file = open("export_headers.sql", "w")
schema = 'your_schema'
sql_output_path = 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/'
for table in tables:
    path = os.path.join(sql_output_path, '{}_header.csv'.format(table))
    string = "(select GROUP_CONCAT(COLUMN_NAME)\nfrom INFORMATION_SCHEMA.COLUMNS\nWHERE TABLE_NAME = '{}'\nAND TABLE_SCHEMA = '{}'\norder BY ORDINAL_POSITION)\nINTO OUTFILE '{}';".format(table, schema, path)
    n = text_file.write(string)
    n = text_file.write('\n\n')
text_file.close()

Then execute this Python code, which will generate a SQL script to export the values of all tables:

text_file = open("export_values.sql", "w")
for table in tables:
    path = os.path.join(sql_output_path, '{}.csv'.format(table))
    string = "SELECT * FROM {}.{}\nINTO OUTFILE '{}';".format(schema, table, path)
    n = text_file.write(string)
    n = text_file.write('\n\n')
text_file.close()

Execute the two generated SQL scripts and move the header CSVs and value CSVs into directories of your choice.
Then execute this last Python code:

# Respectively the path to the header CSVs, the value CSVs, and the path where
# you want to put the CSVs with headers and values combined
headers_path, values_path, tables_path = '', '', ''
for table in tables:
    header = pd.read_csv(os.path.join(headers_path, '{}_header.csv'.format(table)))
    df = pd.read_csv(os.path.join(values_path, '{}.csv'.format(table)), names=header.columns, sep='\t')
    df.to_csv(os.path.join(tables_path, '{}.csv'.format(table)), index=False)

Then you have all your tables exported to CSV with headers, without having to write or copy-paste all the table and column names.
Inspired by the pivot table example from Rick James.

SET @CSVTABLE = 'myTableName',
    @CSVBASE = 'databaseName',
    @CSVFILE = '/tmp/filename.csv';

SET @sql = (SELECT CONCAT("SELECT ",
                          GROUP_CONCAT(CONCAT('"', COLUMN_NAME, '"')),
                          " UNION SELECT * FROM ", @CSVBASE, ".", @CSVTABLE)
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = @CSVTABLE AND TABLE_SCHEMA = @CSVBASE);

SET @sql = CONCAT(@sql, " INTO OUTFILE '", @CSVFILE,
                  "' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\\n';");

PREPARE stmt FROM @sql;
EXECUTE stmt;

It gets the list of columns from the INFORMATION_SCHEMA.COLUMNS table and uses GROUP_CONCAT to prepare a SELECT statement with a list of strings holding the column names. Next, a UNION is added with SELECT * FROM the specified database.table; this creates query text that will output both column names and column values in the result. Then the CSV-output-specific clauses are appended to the query text stored in the @sql variable (note that PREPARE only accepts a user variable or a string literal, not a CONCAT() expression, which is why the OUTFILE clause is folded into @sql first), the statement is prepared, and finally it is executed with EXECUTE stmt.
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE 'c:\\datasheet.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
How to convert MySQL column names to lowercase for better readability?
I've got a db with a dozen tables inside it that are irregular in their use of lower- and uppercase. Is there a simple way to convert all column names in all MySQL tables to lowercase? Maybe with the exception of names ending in "_ID"? I know that MySQL isn't case sensitive; this is all about better readability. If you have column names like "Author", "AUTHOR_ID" and "AuthorName" in one table, it's hard to read, and I'd like it to be more consistent using lowercase.
Edit:
- Open phpMyAdmin.
- Hit Export and export the file as a *.sql file.
- Edit the SQL file and you will find a lot of CREATE TABLE and similar queries.
- Edit the names in your text editor and save it.
- Open phpMyAdmin, delete all tables, import your earlier saved *.sql file, run it, and it should do the trick!

Otherwise: different data types have different requirements, so you need the UNIONs. (Note that MySQL uses CONCAT() rather than the || operator for string concatenation, unless PIPES_AS_CONCAT mode is enabled, and the information_schema column is named DATA_TYPE.)

SELECT CONCAT('ALTER TABLE ', table_name, ' CHANGE ', column_name, ' ',
              LOWER(column_name), ' ', data_type, '(', character_maximum_length, ');') AS Line
FROM information_schema.columns
WHERE table_schema = 'dbname' AND data_type IN ('CHAR', 'VARCHAR')
UNION
SELECT CONCAT('ALTER TABLE ', table_name, ' CHANGE ', column_name, ' ',
              LOWER(column_name), ' ', data_type, '(', numeric_precision, ');') AS Line
FROM information_schema.columns
WHERE table_schema = 'dbname' AND data_type IN ('INTEGER')
UNION
SELECT CONCAT('ALTER TABLE ', table_name, ' CHANGE ', column_name, ' ',
              LOWER(column_name), ' ', data_type, '(', numeric_precision, ',', numeric_scale, ');') AS Line
FROM information_schema.columns
WHERE table_schema = 'dbname' AND data_type IN ('FLOAT')
UNION
SELECT CONCAT('ALTER TABLE ', table_name, ' CHANGE ', column_name, ' ',
              LOWER(column_name), ' ', data_type, ';') AS Line
FROM information_schema.columns
WHERE table_schema = 'dbname' AND data_type IN ('DATE')
ORDER BY Line

Also: a MySQL script to convert the column names to lowercase
This:

$sql = mysqli_query($dbConn, "SHOW TABLES");
$tables = array();
while($table_row = mysqli_fetch_array($sql))
{
    $tables[] = $table_row[0];
}
foreach($tables as $table)
{
    $sql = mysqli_query($dbConn, "SHOW COLUMNS FROM " . $table);
    $query = "ALTER TABLE " . $table;
    while($column_row = mysqli_fetch_array($sql))
    {
        if(substr($column_row['Field'], -3) == "_ID") {
            $new_column = strtolower(substr($column_row['Field'], 0, -3)) . "_ID";
        } else {
            $new_column = strtolower($column_row['Field']);
        }
        $query .= " CHANGE COLUMN " . $column_row['Field'] . " " . $new_column
                . " " . $column_row['Type']
                . " " . (($column_row['Null'] == 'YES') ? 'NULL' : 'NOT NULL')
                . " " . (($column_row['Default'] == '') ? '' : ' DEFAULT ' . $column_row['Default'])
                . " " . $column_row['Extra'] . ",";
    }
    $query = rtrim($query, ',');
    echo $query;
    echo "<br><br><br>";
}

will give you a list of all the ALTER TABLE statements for each column in each table.

Warning: I've made the above code print all the ALTER statements on screen; it's up to you to decide whether to execute them. I'm not sure whether your database will remain the same after executing them, because they may not cover all the possible data types and conditions for each column. Run the above (it won't make any changes to your database) and check whether any of your columns are missing a significant data type or conditional value.
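The renaming rule used above (lowercase everything except a trailing "_ID" suffix, as the question asked for) can be sketched as pure string logic, no database required; the sample names below are taken from the question:

```python
# Lowercase a column name, but preserve a trailing "_ID" suffix verbatim.
def new_column_name(name: str) -> str:
    if name.endswith("_ID"):
        return name[:-3].lower() + "_ID"
    return name.lower()

for col in ["Author", "AUTHOR_ID", "AuthorName"]:
    print(col, "->", new_column_name(col))
```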
Hello, here is example code that may help you. (Note: it uses the legacy mysql_* extension, which is deprecated; current PHP should use mysqli or PDO.)

$c1 = mysql_connect("localhost", "root", ""); // connection
$db1 = mysql_select_db("INFORMATION_SCHEMA");
$get_column = mysql_query("SELECT * FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='data_base_name' AND `TABLE_NAME`='table_name'");
while($row = mysql_fetch_assoc($get_column))
{
    $old_name = $row['COLUMN_NAME'];
    $new_name = strtolower($row['COLUMN_NAME']);
    $datatype = $row['DATA_TYPE'];
    $size = $row['CHARACTER_MAXIMUM_LENGTH'];
    if($row['DATA_TYPE'] != "varchar" && $row['DATA_TYPE'] != "text") {
        $query = "ALTER TABLE mstusers CHANGE $old_name $new_name $datatype" . ";<br/>";
    } else {
        $query = "ALTER TABLE mstusers CHANGE $old_name $new_name $datatype ($size)" . ";<br/>";
    }
    echo $query;
}
// Paste the printed queries into your phpMyAdmin

Please check this link for more detail: http://myphpinformation.blogspot.in/2016/10/convert-column-name-into-lowercase-in-mysql-and-php.html
@ := operators / keywords in mysql [duplicate]
I have an INSERT statement in a PHP file wherein at-signs (@) occur in front of the column names:

@field1, @field2,

It is a MySQL database. What does the at-sign mean?

Edit: There is no SET @field1 := 'test' in the PHP script. The PHP script reads a CSV and puts the data into the table. Can it be misused as a commenting-out feature?

<?php
$typo_db_username = 'xyz'; // Modified or inserted by TYPO3 Install Tool.
$typo_db_password = 'xyz'; // Modified or inserted by TYPO3 Install Tool.

// login
$_SESSION['host'] = "localhost";
$_SESSION['port'] = "3306";
$_SESSION['user'] = $typo_db_username;
$_SESSION['password'] = $typo_db_password;
$_SESSION['dbname'] = "database";

$cxn = mysqli_connect($_SESSION['host'], $_SESSION['user'], $_SESSION['password'], $_SESSION['dbname'], $_SESSION['port'])
    or die("SQL Error: " . mysqli_connect_error());
mysqli_query($cxn, "SET NAMES utf8");

$sqltrunc = "TRUNCATE TABLE tablename";
$resulttrunc = mysqli_query($cxn, $sqltrunc) or die("Couldn't execute query: " . mysqli_error($cxn));

$sql1 = "
    LOAD DATA LOCAL INFILE 'import.csv'
    REPLACE INTO TABLE tablename
    FIELDS TERMINATED BY ';'
    OPTIONALLY ENCLOSED BY '\"'
    IGNORE 1 LINES
    ( `normalField`, @field1, @field2, `normalField2`, @field3, @field4 )";
$result1 = mysqli_query($cxn, $sql1) or die("Couldn't execute query: " . mysqli_error($cxn));
?>

SOLUTION: Finally, I found it out! The @field is used as a dummy to skip over a column in a CSV file. See:
http://www.php-resource.de/forum/showthread/t-97082.html
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
The @ sign denotes a user variable in SQL. In MySQL it is used to store a value between consecutive runs of a query, or to transfer data between two different queries.

An example, transferring data between two queries:

SELECT @biggest := MAX(field1) FROM atable;
SELECT * FROM bigger_table WHERE field1 > @biggest;

Another usage is in ranking, which MySQL doesn't have native support for. Store a value for consecutive runs of a query:

INSERT INTO table2
SELECT @rank := @rank + 1, table1.*
FROM table1
JOIN ( SELECT @rank := 0 ) AS init
ORDER BY number_of_users DESC

Note that in order for this to work, the order in which the rows get processed in the query must be fixed; it's easy to get this wrong. See:
http://dev.mysql.com/doc/refman/5.0/en/user-variables.html
mysql sorting and ranking statement
http://www.xaprb.com/blog/2006/12/15/advanced-mysql-user-variable-techniques/

UPDATE
This code will never work. You've just opened the connection before, and nowhere are the @fields set, so currently they hold null values. On top of that, you cannot use @vars to denote field names; you can only use @vars for values.

$sql1 = "
    LOAD DATA LOCAL INFILE 'import.csv'
    REPLACE INTO TABLE tablename
    FIELDS TERMINATED BY ';'
    OPTIONALLY ENCLOSED BY '\"'
    IGNORE 1 LINES
    (`normalField`, @field1, @field2, `normalField2`, @field3, @field4)";
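What the @rank running counter computes can be mirrored in plain Python to make the semantics concrete (the table data here is a made-up assumption):

```python
# Emulate: SELECT @rank := @rank + 1, t.* FROM t ORDER BY number_of_users DESC
# (name, number_of_users) rows standing in for table1
rows = [("alice", 10), ("bob", 42), ("carol", 7)]

rank = 0  # plays the role of SELECT @rank := 0
ranked = []
# The ORDER BY fixes the processing order, which is what makes the trick valid
for name, users in sorted(rows, key=lambda r: r[1], reverse=True):
    rank += 1  # @rank := @rank + 1, evaluated once per processed row
    ranked.append((rank, name, users))

print(ranked)
```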
MySQL: Query for list of available options for SET
In particular table there exists a field of SET type with specific legal values: personType SET('CUSTOMER','SUPPLIER','EMPLOYEE', 'CONTRACTOR') NOT NULL Is there any way to query MySQL to get a list of the valid values? In the MySQL interpreter I would just run DESCRIBE someTable; however if there is a more direct method that one could use programmatically without lots of parsing it would be nice. Thanks.
Now, this simply freaks out, but it is MySQL-only and it works!

SELECT TRIM("'" FROM SUBSTRING_INDEX(SUBSTRING_INDEX(
    (SELECT TRIM(')' FROM SUBSTR(column_type, 5))
     FROM information_schema.columns
     WHERE table_name = 'some_table' AND column_name = 'some_column'),
    ',', @r := @r + 1), ',', -1)) AS item
FROM (SELECT @r := 0) deriv1,
     (SELECT ID FROM information_schema.COLLATIONS) deriv2
HAVING @r <= (SELECT LENGTH(column_type) - LENGTH(REPLACE(column_type, ',', ''))
              FROM information_schema.columns
              WHERE table_name = 'some_table' AND column_name = 'some_column');

Just replace "some_table" and "some_column" with your specific table/column, and see the magic! You will see a weird usage of information_schema.COLLATIONS: this is because we need a table there, any table, containing at least N rows, where N is the number of elements in your set.
SELECT column_type
FROM information_schema.columns
WHERE table_name = 'some_table' AND column_name = 'some_column';

Returns:

column_type
------------------
set('this','that')
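The column_type string still needs parsing on the client side. A minimal sketch in Python, using the sample value from the answer above (the parsing approach is an illustration, not the only way):

```python
import csv
import io

column_type = "set('this','that')"  # as returned by information_schema.columns

# Strip the leading "set(" and the trailing ")", then parse the quoted,
# comma-separated list with the csv module, using ' as the quote character.
inner = column_type[len("set("):-1]
options = next(csv.reader(io.StringIO(inner), quotechar="'"))
print(options)
```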
The function below returns an array containing all available options for the SET, with some parsing but not "lots of parsing"... :)

function get_set_values($table_name, $field_name)
{
    $sql = 'DESCRIBE ' . $table_name . ' ' . $field_name;
    $result = mysql_query($sql);
    $row = mysql_fetch_array($result);

    // $row['Type'] looks like "set('a','b')": strip "set" and the parentheses,
    // then parse the quoted list.
    return str_getcsv(trim(substr($row['Type'], 3), '()'), ',', "'");
}

Remember that in a SET column you may have a combination of values or even an empty value (these are also valid).