# @ := operators / keywords in MySQL [duplicate]

I have an INSERT statement in a PHP file where at-signs (@) appear in front of the column names.
@field1,
@field2,
It is a MySQL database. What does the at-sign mean?
Edit:
There is no SET @field1 := 'test' in the PHP script. The PHP script reads a CSV file and puts the data into the table. Can the at-sign be misused as a commenting-out feature?
<?php
$typo_db_username = 'xyz'; // Modified or inserted by TYPO3 Install Tool.
$typo_db_password = 'xyz'; // Modified or inserted by TYPO3 Install Tool.
// login
$_SESSION['host'] = "localhost";
$_SESSION['port'] = "3306";
$_SESSION['user'] = $typo_db_username;
$_SESSION['password'] = $typo_db_password;
$_SESSION['dbname'] = "database";
$cxn = mysqli_connect($_SESSION['host'], $_SESSION['user'], $_SESSION['password'], $_SESSION['dbname'], $_SESSION['port']) or die ("SQL Error:" . mysqli_connect_error() );
mysqli_query($cxn, "SET NAMES utf8");
$sqltrunc = "TRUNCATE TABLE tablename";
$resulttrunc = mysqli_query($cxn,$sqltrunc) or die ("Couldn’t execute query: ".mysqli_error($cxn));
$sql1 = "
LOAD DATA LOCAL
INFILE 'import.csv'
REPLACE
INTO TABLE tablename
FIELDS
TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '\"'
IGNORE 1 LINES
(
`normalField`,
@field1,
@field2,
`normalField2`,
@field3,
@field4
)";
$result1 = mysqli_query($cxn,$sql1) or die ("Couldn’t execute query: " . mysqli_error($cxn));
?>
SOLUTION:
Finally, I found it out! The @field is used as a dummy to skip a column in the CSV file. See http://www.php-resource.de/forum/showthread/t-97082.html
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
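For illustration, here is a minimal sketch of that pattern (the file and column names are placeholders, not taken from the script above): the second CSV field is read into the user variable @dummy and simply never assigned to any table column, so it is discarded.
LOAD DATA LOCAL INFILE 'import.csv'
INTO TABLE tablename
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(`normalField`, @dummy, `normalField2`);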

The @ sign denotes a user-defined variable in MySQL.
It is used to store a value between consecutive runs of a query, or to transfer data between two different queries.
An example
Transfer data between two queries
SELECT @biggest := MAX(field1) FROM atable;
SELECT * FROM bigger_table WHERE field1 > @biggest;
Another usage is in ranking, which MySQL doesn't have native support for.
Store a value for consecutive runs of a query
INSERT INTO table2
SELECT @rank := @rank + 1, table1.* FROM table1
JOIN( SELECT @rank := 0 ) AS init
ORDER BY number_of_users DESC
Note that in order for this to work, the order in which the rows get processed in the query must be fixed; it's easy to get this wrong.
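As a hedged sketch of one way to pin that order (an assumption about approach, not taken from the linked answers): rank over a derived table that is already ordered. Be aware that some MySQL versions may ignore an ORDER BY inside a derived table unless a LIMIT is also present.
INSERT INTO table2
SELECT @rank := @rank + 1, ordered.*
FROM ( SELECT * FROM table1 ORDER BY number_of_users DESC ) AS ordered
JOIN ( SELECT @rank := 0 ) AS init;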
See:
http://dev.mysql.com/doc/refman/5.0/en/user-variables.html
mysql sorting and ranking statement
http://www.xaprb.com/blog/2006/12/15/advanced-mysql-user-variable-techniques/
UPDATE
This code will never work.
You've only just opened the connection, and nowhere are the @fields set.
So currently they hold NULL values.
On top of that, you cannot use @vars to denote field names; you can only use @vars for values.
$sql1 = "
LOAD DATA LOCAL INFILE 'import.csv'
REPLACE INTO TABLE tablename
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '\"'
IGNORE 1 LINES
(`normalField`, @field1, @field2, `normalField2`, @field3, @field4)";
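For contrast, a hedged sketch of the values-only use mentioned above (someColumn and the NULLIF transformation are assumptions for illustration, not part of the original script): a captured @var is normally applied to a real column through a SET clause.
LOAD DATA LOCAL INFILE 'import.csv'
REPLACE INTO TABLE tablename
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(`normalField`, @field1, `normalField2`)
SET someColumn = NULLIF(@field1, '');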

Related

How to export data to .psv file in mysql with table headers? [duplicate]

Is it possible to include the headers somehow when using the MySQL INTO OUTFILE?
You'd have to hard code those headers yourself. Something like:
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE '/path/outfile'
The solution provided by Joe Steanelli works, but making a list of columns is inconvenient when dozens or hundreds of columns are involved. Here's how to get the column list of table my_table in my_schema.
-- override GROUP_CONCAT limit of 1024 characters to avoid a truncated result
set session group_concat_max_len = 1000000;
select GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'"))
from INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'my_table'
AND TABLE_SCHEMA = 'my_schema'
order BY ORDINAL_POSITION
Now you can copy & paste the resulting row as the first statement in Joe's method.
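For instance, if the GROUP_CONCAT result were 'id','name','email' (hypothetical column names), the pasted-together export would look roughly like:
SELECT 'id','name','email'
UNION ALL
SELECT id, name, email
FROM my_schema.my_table
INTO OUTFILE '/path/outfile';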
For complex select with ORDER BY I use the following:
SELECT * FROM (
SELECT 'Column name #1', 'Column name #2', 'Column name ##'
UNION ALL
(
-- complex SELECT statement with WHERE, ORDER BY, GROUP BY etc.
)
) resulting_set
INTO OUTFILE '/path/to/file';
This will allow you to have ordered columns and/or a limit:
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT * from (SELECT ColName1, ColName2, ColName3
FROM YourTable order by ColName1 limit 3) a
INTO OUTFILE '/path/outfile';
You can use a prepared statement with lucek's answer to dynamically export the table with column names to CSV:
-- If your table has too many columns
SET GLOBAL group_concat_max_len = 100000000;
-- Prepared statement
SET @SQL = ( select CONCAT('SELECT * INTO OUTFILE \'YOUR_PATH\' FIELDS TERMINATED BY \',\' OPTIONALLY ENCLOSED BY \'"\' ESCAPED BY \'\' LINES TERMINATED BY \'\\n\' FROM (SELECT ', GROUP_CONCAT(CONCAT("'",COLUMN_NAME,"'")),' UNION select * from YOUR_TABLE) as tmp') from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'YOUR_TABLE' AND TABLE_SCHEMA = 'YOUR_SCHEMA' order BY ORDINAL_POSITION );
-- Execute it
PREPARE stmt FROM @SQL;
EXECUTE stmt;
Thanks lucek.
I simply make two queries: the first gets the query output (limit 1) with column names (no hardcoding, no problems with joins, ORDER BY, custom column names, etc.), and the second runs the query itself; then I combine the files into one CSV file:
CSVHEAD=`/usr/bin/mysql $CONNECTION_STRING -e "$QUERY limit 1;"|head -n1|xargs|sed -e "s/ /'\;'/g"`
echo "\'$CSVHEAD\'" > $TMP/head.txt
/usr/bin/mysql $CONNECTION_STRING -e "$QUERY into outfile '${TMP}/data.txt' fields terminated by ';' optionally enclosed by '\"' escaped by '' lines terminated by '\r\n';"
cat $TMP/head.txt $TMP/data.txt > $TMP/data.csv
This is an alternative cheat if you are familiar with Python or R, and your table can fit into memory.
Import the SQL table into Python or R and then export from there as a CSV and you'll get the column names as well as the data.
Here's how I do it using R, requires the RMySQL library:
db <- dbConnect(MySQL(), user='user', password='password', dbname='myschema', host='localhost')
query <- dbSendQuery(db, "select * from mytable")
dataset <- fetch(query, n=-1)
write.csv(dataset, 'mytable_backup.csv')
It's a bit of a cheat but I found this was a quick workaround when my number of columns was too long to use the concat method above. Note: R will add a 'row.names' column at the start of the CSV so you'll want to drop that if you do need to rely on the CSV to recreate the table.
I faced a similar problem while executing MySQL queries on large tables in NodeJS. The approach I followed to include headers in my CSV file is as follows:
Use an OUTFILE query to prepare the file without headers
SELECT * INTO OUTFILE [FILE_NAME] FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED
BY '"' LINES TERMINATED BY '\n' FROM [TABLE_NAME]
Fetch the column headers for the table used in step 1
select GROUP_CONCAT(CONCAT("",COLUMN_NAME,"")) as col_names from
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = [TABLE_NAME] AND TABLE_SCHEMA
= [DATABASE_NAME] ORDER BY ORDINAL_POSITION
Append the column headers to the file created in step 1 using the prepend-file npm package
Execution of each step was controlled using promises in NodeJS.
I think if you use a UNION it will work:
select 'header 1', 'header 2', ...
union
select col1, col2, ... from ...
I don't know of a way to specify the headers with the INTO OUTFILE syntax directly.
Since the 'include headers' functionality doesn't seem to be built in yet, and most "solutions" here require typing the column names manually and/or don't take joins into account, I'd recommend working around the problem.
The best alternative I found so far is using a decent tool (I use HeidiSQL).
Put in your query, select the grid, right-click and export to a file. It has all the necessary options for a clean export and should handle most needs.
Along the same lines, user3037511's approach works fine and can be automated easily.
Just launch your query from the command line to get your headers. You can get the data with a SELECT INTO OUTFILE ... or by running your query without the limit; your choice.
Note that output redirection to a file works like a charm on both Linux AND Windows.
This makes me want to highlight that 80% of the time, when I want to use SELECT FROM INFILE or SELECT INTO OUTFILE, I end up using something else due to some limitation (here, the absence of a 'headers' option; on AWS RDS, the missing rights; and so on).
Hence, I don't exactly answer the OP's question... but it should answer his needs :)
EDIT: and to actually answer the question: no.
As of 2017-09-07, you just can't include headers if you stick with the SELECT INTO OUTFILE command :|
The easiest way is to hard code the columns yourself to better control the output file:
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE '/path/outfile'
Actually you can make it work even with an ORDER BY.
It just needs some trickery in the ORDER BY clause: we use a CASE statement and replace the header value with some other value that is guaranteed to sort first in the list (obviously this depends on the type of the field and whether you are sorting ASC or DESC).
Let's say you have three fields, name (varchar), is_active (bool), date_something_happens (date), and you want to sort the second two descending:
select
'name'
, 'is_active' as is_active
, 'date_something_happens' as date_something_happens
union all
select name, is_active, date_something_happens
from
my_table
order by
(case is_active when 'is_active' then 0 else is_active end) desc
, (case date_something_happens when 'date_something_happens' then '9999-12-30' else date_something_happens end) desc
So, if all the columns in my_table are a character data type, we can combine the top answers (by Joe, matt and evilguc) to get the header added automatically in one 'simple' SQL query, e.g.
select * from (
(select column_name
from information_schema.columns
where table_name = 'my_table'
and table_schema = 'my_schema'
order by ordinal_position)
union all
(select * -- potentially complex SELECT statement with WHERE, ORDER BY, GROUP BY etc.
from my_table)) as tbl
into outfile '/path/outfile'
fields terminated by ',' optionally enclosed by '"' escaped by '\\'
lines terminated by '\n';
where the last couple of lines make the output csv.
Note that this may be slow if my_table is very large.
An example from my database:
table name sensor with columns (id, time, unit)
select ('id') as id, ('time') as time, ('unit') as unit
UNION ALL
SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sensor
If you are using MySQL Workbench:
Select all the columns from the SCHEMAS tab -> Right Click -> Copy to Clipboard -> Name
Paste it in any text editor and replace " ` " with " ' "
Copy it back and use it in your UNION query (as mentioned in the accepted answer):
SELECT [Paste your text here]
UNION ALL
SELECT *
FROM table_name
INTO OUTFILE 'file_path'
I was writing my code in PHP and had a bit of trouble using the concat and union functions, and I also did not use SQL variables. Anyway, I got it to work; here is my code:
//first I connected to the information_schema DB
$headercon=mysqli_connect("localhost", "USERNAME", "PASSWORD", "information_schema");
//took the headers out into a string (I could not get the concat function to work, so I wrote a loop for it)
$headers = '';
$sql = "SELECT column_name AS columns FROM `COLUMNS` WHERE table_schema = 'YOUR_DB_NAME' AND table_name = 'YOUR_TABLE_NAME'";
$result = $headercon->query($sql);
while($row = $result->fetch_row())
{
$headers = $headers . "'" . $row[0] . "', ";
}
$headers = substr("$headers", 0, -2);
// connect to the DB of interest
$con=mysqli_connect("localhost", "USERNAME", "PASSWORD", "YOUR_DB_NAME");
// export the results to csv
$sql4 = "SELECT $headers UNION SELECT * FROM YOUR_TABLE_NAME WHERE ... INTO OUTFILE '/output.csv' FIELDS TERMINATED BY ','";
$result4 = $con->query($sql4);
Here is a way to get the header titles from the column names dynamically.
/* Change table_name and database_name */
SET @table_name = 'table_name';
SET @table_schema = 'database_name';
SET @default_group_concat_max_len = (SELECT @@group_concat_max_len);
/* Sets Group Concat Max Limit larger for tables with a lot of columns */
SET SESSION group_concat_max_len = 1000000;
SET @col_names = (
SELECT GROUP_CONCAT(QUOTE(`column_name`)) AS columns
FROM information_schema.columns
WHERE table_schema = @table_schema
AND table_name = @table_name);
SET @cols = CONCAT('(SELECT ', @col_names, ')');
SET @query = CONCAT('(SELECT * FROM ', @table_schema, '.', @table_name,
' INTO OUTFILE \'/tmp/your_csv_file.csv\'
FIELDS ENCLOSED BY \'\\\'\' TERMINATED BY \'\t\' ESCAPED BY \'\'
LINES TERMINATED BY \'\n\')');
/* Concatenates column names to query */
SET @sql = CONCAT(@cols, ' UNION ALL ', @query);
/* Resets Group Concat Max Limit back to original value */
SET SESSION group_concat_max_len = @default_group_concat_max_len;
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
I would like to add to the answer provided by Sangam Belose. Here's his code:
select ('id') as id, ('time') as time, ('unit') as unit
UNION ALL
SELECT * INTO OUTFILE 'C:/Users/User/Downloads/data.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sensor
However, if you have not set up "secure_file_priv" among your server variables, it may not work. Check the folder set in that variable with:
SHOW VARIABLES LIKE "secure_file_priv"
The output should look like this:
mysql> show variables like "%secure_file_priv%";
+------------------+------------------------------------------------+
| Variable_name | Value |
+------------------+------------------------------------------------+
| secure_file_priv | C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\ |
+------------------+------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
You can either change this variable or change the query to output the file to the default path shown.
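For example, a minimal sketch that keeps the query from the quoted answer but writes into the secure_file_priv directory shown above (the exact path is taken from that example output and may differ on your server):
SELECT ('id') as id, ('time') as time, ('unit') as unit
UNION ALL
SELECT * INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/data.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sensor;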
MySQL alone isn't enough to do this simply. Below is a PHP script that will output columns and data to CSV.
Enter your database name and tables near the top.
<?php
set_time_limit( 24192000 );
ini_set( 'memory_limit', '-1' );
setlocale( LC_CTYPE, 'en_US.UTF-8' );
mb_regex_encoding( 'UTF-8' );
$dbn = 'DB_NAME';
$tbls = array(
'TABLE1',
'TABLE2',
'TABLE3'
);
$db = new PDO( 'mysql:host=localhost;dbname=' . $dbn . ';charset=UTF8', 'root', 'pass' );
foreach( $tbls as $tbl )
{
echo $tbl . "\n";
$path = '/var/lib/mysql/' . $tbl . '.csv';
$colStr = '';
$cols = $db->query( 'SELECT COLUMN_NAME AS `column` FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = "' . $tbl . '" AND TABLE_SCHEMA = "' . $dbn . '"' )->fetchAll( PDO::FETCH_COLUMN );
foreach( $cols as $col )
{
if( $colStr ) $colStr .= ', ';
$colStr .= '"' . $col . '"';
}
$db->query(
'SELECT *
FROM
(
SELECT ' . $colStr . '
UNION ALL
SELECT * FROM ' . $tbl . '
) AS sub
INTO OUTFILE "' . $path . '"
FIELDS TERMINATED BY ","
ENCLOSED BY "\""
LINES TERMINATED BY "\n"'
);
exec( 'gzip ' . $path );
print_r( $db->errorInfo() );
}
?>
You'll need this to be the directory you'd like to output to. MySQL needs to have the ability to write to the directory.
$path = '/var/lib/mysql/' . $tbl . '.csv';
You can edit the CSV export options in the query:
INTO OUTFILE "' . $path . '"
FIELDS TERMINATED BY ","
ENCLOSED BY "\""
LINES TERMINATED BY "\n"'
At the end there is an exec call to GZip the CSV.
I had no luck with any of these, so after finding a solution, I wanted to add it to the prior answers. Python 3.8.6, MySQL 8.0.19.
Note a couple of things:
First, the query that returns column names is unforgiving of punctuation. Using backticks, or leaving out the single quotes around 'schema_name' and 'table_name', will throw an "unknown column" error.
WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table'
Second, the column header names come back as a single-element tuple with all the column names concatenated into one quoted string. Converting it to a list was easy, but not intuitive (for me at least).
headers_list = headers_result[0].split(",")
Third, the cursor must be buffered or the "lazy" cursor will not fetch your results as you need them. For very large tables, memory could be an issue; perhaps chunking would solve that problem.
cur = db.cursor(buffered=True)
Last, all my UNION attempts yielded errors. By zipping the whole mess into a list of dicts, it became trivial to write a CSV using csv.DictWriter.
import csv
import mysql.connector  # assumes the mysql-connector-python driver, matching cursor(buffered=True)

# connection details and the output path are placeholders, not part of the original snippet
db = mysql.connector.connect(host='localhost', user='user', password='password')
csv_destination_file = 'table.csv'

headers_sql = """
SELECT
GROUP_CONCAT(CONCAT(COLUMN_NAME) ORDER BY ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'schema' AND TABLE_NAME = 'table';
"""
cur = db.cursor(buffered=True)
cur.execute(headers_sql)
headers_result = cur.fetchone()
headers_list = headers_result[0].split(",")

rows_sql = """ SELECT * FROM schema.table; """
cur.execute(rows_sql)
data_rows = cur.fetchall()
data_as_list_of_dicts = [dict(zip(headers_list, row)) for row in data_rows]

with open(csv_destination_file, 'w', encoding='utf-8', newline='') as destination_file_opened:
    dict_writer = csv.DictWriter(destination_file_opened, fieldnames=headers_list)
    dict_writer.writeheader()
    for row_dict in data_as_list_of_dicts:
        dict_writer.writerow(row_dict)
A solution using Python, with no need to install a Python package to read SQL files if you already use another tool.
If you are not familiar with Python, you can run the Python code in a Colab notebook, where all the required packages are already installed. It automates Matt's and Joe's solutions.
First, execute this SQL script to get a csv with all table names:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA='your_schema'
INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/tables.csv';
Then move tables.csv to a suitable directory and execute this Python code after replacing 'path_to_tables' and 'your_schema'. It will generate a SQL script to export all table headers:
import pandas as pd
import os

tables = pd.read_csv('tables.csv', header=None)[0]
text_file = open("export_headers.sql", "w")
schema = 'your_schema'
sql_output_path = 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/'
for table in tables:
    path = os.path.join(sql_output_path, '{}_header.csv'.format(table))
    string = "(select GROUP_CONCAT(COLUMN_NAME)\nfrom INFORMATION_SCHEMA.COLUMNS\nWHERE TABLE_NAME = '{}'\nAND TABLE_SCHEMA = '{}'\norder BY ORDINAL_POSITION)\nINTO OUTFILE '{}';".format(table, schema, path)
    n = text_file.write(string)
    n = text_file.write('\n\n')
text_file.close()
Then execute this Python code, which will generate a SQL script to export the values of all tables:
text_file = open("export_values.sql", "w")
for table in tables :
path = os.path.join(sql_output_path,'{}.csv'.format(table))
string = "SELECT * FROM {}.{}\nINTO OUTFILE '{}';".format(schema,table,path)
n = text_file.write(string)
n = text_file.write('\n\n')
text_file.close()
Execute the two generated SQL scripts and move the header csvs and value csvs into directories of your choice.
Then execute this last Python code:
# Respectively: the path to the header csvs, the path to the value csvs, and the path where you
# want to put the csvs with headers and values combined
headers_path, values_path, tables_path = '', '', ''
for table in tables:
    header = pd.read_csv(os.path.join(headers_path, '{}_header.csv'.format(table)))
    df = pd.read_csv(os.path.join(values_path, '{}.csv'.format(table)), names=header.columns, sep='\t')
    df.to_csv(os.path.join(tables_path, '{}.csv'.format(table)), index=False)
Then you have all your tables exported to csv with headers, without having to write or copy-paste all the table and column names.
Inspired by pivot table example from Rick James.
SET @CSVTABLE = 'myTableName',
@CSVBASE = 'databaseName',
@CSVFILE = '/tmp/filename.csv';
SET @sql = (SELECT CONCAT("SELECT ", GROUP_CONCAT(CONCAT('"', COLUMN_NAME, '"')), " UNION SELECT * FROM ", @CSVBASE, ".", @CSVTABLE) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME=@CSVTABLE AND TABLE_SCHEMA=@CSVBASE);
SET @sql = CONCAT(@sql, " INTO OUTFILE '", @CSVFILE, "' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\\n';");
prepare stmt from @sql;
execute stmt;
It gets the list of columns from the INFORMATION_SCHEMA.COLUMNS table and uses GROUP_CONCAT to prepare a SELECT statement with a list of strings containing the column names.
Next a UNION is added with SELECT * FROM the specified database.table - this creates query text that will output both the column names and the column values in the result.
The statement is then prepared from the previously created query (stored in the @sql variable), the CSV-specific output clauses are appended, and finally the statement is executed with execute stmt.
SELECT 'ColName1', 'ColName2', 'ColName3'
UNION ALL
SELECT ColName1, ColName2, ColName3
FROM YourTable
INTO OUTFILE 'c:\\datasheet.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n'

invalid byte sequence for encoding "UTF8": 0xed 0xa0 0xbd

I have been importing some data from MySQL to Postgres. The plan should have been simple: manually re-create the tables with their equivalent data types, devise a way to output as CSV, transfer the data over, and copy it into Postgres. Done.
mysql -u whatever -p whatever -d the_database
SELECT * INTO OUTFILE '/tmp/the_table.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\' FROM the_table;
send and import to postgres
psql -etcetc -d other_database
COPY the_table FROM '/csv/file/location/the_table.csv' WITH( FORMAT CSV, DELIMITER ',', QUOTE '"', ESCAPE '\', NULL '\N' );
It had been too long; I had forgotten that '0000-00-00' was a thing...
So first of all I had to come up with some way of addressing the weird data types, preferably at the MySQL end, and so I wrote this script for the 20 or so tables I planned to import, to address any incompatibilities and list out the columns accordingly:
with a as (
select
'the_table'::text as tblname,
'public'::text as schname
), b as (
select array_to_string( array_agg( x.column_name ), ',' ) as the_cols from (
select
case
when udt_name = 'timestamp'
then 'NULLIF('|| column_name::text || ',''0000-00-00 00:00:00'')'
when udt_name = 'date'
then 'NULLIF('|| column_name::text || ',''0000-00-00'')'
else column_name::text
end as column_name
from information_schema.columns, a
where table_schema = a.schname
and table_name = a.tblname
order by ordinal_position
) x
)
select 'SELECT '|| b.the_cols ||' INTO OUTFILE ''/tmp/'|| a.tblname ||'.csv'' FIELDS TERMINATED BY '','' OPTIONALLY ENCLOSED BY ''"'' ESCAPED BY ''\\'' FROM '|| a.tblname ||';' from a,b;
Generate CSV, ok. Transfer across, ok - Once over...
BEGIN;
ALTER TABLE the_table SET( autovacuum_enabled = false, toast.autovacuum_enabled = false );
COPY the_table FROM '/csv/file/location/the_table.csv' WITH( FORMAT CSV, DELIMITER ',', QUOTE '"', ESCAPE '\', NULL '\N' ); -- '
ALTER TABLE the_table SET( autovacuum_enabled = true, toast.autovacuum_enabled = true );
COMMIT;
and it was all going well, until I came across this message:
ERROR: invalid byte sequence for encoding "UTF8": 0xed 0xa0 0xbd
CONTEXT: COPY new_table, line 12345678
A second table also encountered the same error; however, every other one imported successfully.
Now, all columns and tables in the MySQL db were set to utf8; the first offending table, which contained messages, was along the lines of
CREATE TABLE whatever(
col1 int(11) NOT NULL AUTO_INCREMENT,
col2 date,
col3 int(11),
col4 int(11),
col5 int(11),
col6 int(11),
col7 varchar(64),
PRIMARY KEY(col1)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
So presumably the data should be utf... right? To make sure there were no major errors, I edited my.cnf to set everything I could think of related to the encoding:
[character sets]
default-character-set=utf8
default-character-set=utf8
character-set-server = utf8
collation-server = utf8_unicode_ci
init-connect='SET NAMES utf8'
I altered my initial "query-generating query" case statement to convert text columns, just for the sake of converting:
case
when udt_name = 'timestamp'
then 'NULLIF('|| column_name::text || ',''0000-00-00 00:00:00'')'
when udt_name = 'date'
then 'NULLIF('|| column_name::text || ',''0000-00-00'')'
when udt_name = 'text'
then 'CONVERT('|| column_name::text || ' USING utf8)'
else column_name::text
end as column_name
and still no luck. After googling "0xed 0xa0 0xbd" I am still none the wiser; character sets are not really my thing.
I even opened the 3 GB csv file and went to the line it mentioned, and there didn't appear to be anything out of place; looking with a hex editor I could not see those byte values (edit: maybe I didn't look hard enough), so I am starting to run out of ideas. Am I missing something really simple and, worryingly, is it possible that some of the other tables may have been more "silently" corrupted too?
The MySQL version is 5.5.44 on an Ubuntu 14.04 operating system, and the Postgres is 9.4.
With nothing further to try, I went for the simplest solution: just alter the files.
iconv -f utf-8 -t utf-8 -c the_file.csv > the_file_iconv.csv
There were about 100 bytes of difference between the new files and the originals, so there must have been invalid bytes in there somewhere that I could not see. They imported "properly", so I suppose that is good; however, it would be nice to know if there were some way to enforce proper encoding when creating the files, rather than discovering the problem on import.

Mysql convert 'CYYMM' to 'YYMM'

Stuck with the following:
$loaddata = "LOAD DATA INFILE 'filename.csv'
INTO TABLE tb1
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\\r\\n'
IGNORE 1 LINES
(
Entity,
HK,
@Period,
)
SET Period = STR_TO_DATE(@Period,'%C%YY%MM')
";
which gives me an SQL syntax error near
) SET Period = STR_TO_DATE(@Period,'%C%YY%MM')
Period is a DATE column. For the period Oct-13 the CSV will show 11310.
Thanks in advance!
You have a superfluous comma after @Period:
$loaddata = "LOAD DATA INFILE 'filename.csv'
INTO TABLE tb1
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\\r\\n'
IGNORE 1 LINES
(
Entity,
HK,
@Period -- , removed here
)
SET Period = STR_TO_DATE(@Period,'%C%YY%MM')
";
However, your date format string is almost certainly incorrect. %C, %YY and %MM are invalid specifiers. See DATE_FORMAT().
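As a hedged sketch (not part of the original answer), one way to decode a CYYMM value such as 11310 inside the SET clause, assuming C=0 means 19xx, C=1 means 20xx, and that the day should default to the first of the month:
-- 11310 -> century digit 1, year 13, month 10 -> '2013-10-01'
SET Period = STR_TO_DATE(
    CONCAT(1900 + 100 * (@Period DIV 10000) + ((@Period DIV 100) MOD 100), '-',
           @Period MOD 100, '-01'),
    '%Y-%c-%d')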
