LOAD DATA INFILE does not work - mysql

I am running MySQL on my Ubuntu machine. I checked the /etc/mysql/my.cnf file; it shows my database temporary directory:
...
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
...
As shown, my MySQL server's temporary directory is /tmp.
I have a students.dat file whose content looks like the following:
...
30 kate name
31 John name
32 Bill name
33 Job name
...
I copied the above students.dat file to the /tmp directory. Then I ran the following command to load the data from students.dat into the students table in my database:
LOAD DATA INFILE '/tmp/students.dat'
INTO TABLE school_db.students
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(student_id, name, attribute)
But I got the error message in MySQL console:
ERROR 29 (HY000): File '/tmp/students.dat' not found (Errcode: 13)
Why can't MySQL find the students.dat file even though the file is under the MySQL temporary directory?
P.S.
The students table looks like the following (there were already 4 records in the table before running the LOAD DATA INFILE... query):
mysql> describe students;
+------------+--------------+------+-----+---------+-------+
| Field      | Type         | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| student_id | int(11)      | YES  |     | NULL    |       |
| name       | varchar(255) | YES  | MUL | NULL    |       |
| attribute  | varchar(12)  | YES  | MUL | NULL    |       |
| teacher_id | int(11)      | YES  |     | NULL    |       |
+------------+--------------+------+-----+---------+-------+
4 rows in set (0.00 sec)

Have a look at the sixth post in the file not found error thread. It seems it should work if you specify LOAD DATA LOCAL INFILE (they added the LOCAL keyword).
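Applied to the statement from the question, the LOCAL variant would look something like this (a sketch, not verified against the poster's setup; with LOCAL the mysql client program reads the file instead of the server, so server-side filesystem permissions stop mattering, but local_infile must be enabled on both client and server):

```shell
# Same statement with LOCAL added; --local-infile=1 enables
# client-side file loading in the mysql command-line client.
mysql --local-infile=1 -u user -p -e "
  LOAD DATA LOCAL INFILE '/tmp/students.dat'
  INTO TABLE school_db.students
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
  (student_id, name, attribute);"
```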

ERROR 29 (HY000): File '/tmp/file_name' not found (Errcode: 13)
This error occurs mainly when trying to load a data file from an arbitrary location into a table in a MySQL database. Errcode 13 is the OS error EACCES ("Permission denied"), so the file exists but the server cannot read it.
Just change the owner of the file.
1) Check permissions of the file with this command:
ls -lhrt <filename>
2) Then change ownership:
chown mysql:mysql <filename>
3) Now try the LOAD DATA INFILE command again. It should work.
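The steps above can be sketched end-to-end. This is a minimal sketch: the filename is a stand-in, stat is used instead of ls -lhrt for scriptable output, and the chown line is shown as a comment since it needs root:

```shell
# Create a sample data file standing in for /tmp/students.dat.
f=$(mktemp /tmp/students.XXXXXX)
printf '30\tkate\tname\n' > "$f"

# 1) Check current permissions and owner.
stat -c '%A %U:%G %n' "$f"

# 2) Change ownership (needs root):
#      chown mysql:mysql "$f"
# Alternatively, making the file world-readable is often enough:
chmod 644 "$f"

# 3) Verify before retrying LOAD DATA INFILE.
stat -c '%a' "$f"    # prints: 644
```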

Related

Linux Bash to mysql: Select or delete record with external value

I have a MariaDB (mysql) database with a table which looks like this:
MariaDB [DevicesPool]> show columns from Dirs_and_Names;
+---------------+--------------+------+-----+---------+----------------+
| Field         | Type         | Null | Key | Default | Extra          |
+---------------+--------------+------+-----+---------+----------------+
| LNR           | int(11)      | NO   | PRI | NULL    | auto_increment |
| D_Nr          | varchar(2)   | NO   |     | NULL    |                |
| Filename      | varchar(100) | NO   |     | NULL    |                |
| Path_and_File | varchar(250) | NO   |     | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+
4 rows in set (0.000 sec)
Background: This table stores all my video files, collected from different devices.
My aim is to pick a group of video files from this table and move them to another storage device,
or delete them, including deleting or changing the corresponding record in the table.
My idea is to collect those video files in a (temporary) list and then run a bash script.
But first I made some basic tests passing parameters, which didn't succeed.
First, I use bash to put a number into a string (id), e.g. 218.
Then I would like to use the mysql -e command to SELECT or DELETE the record whose LNR contains the id I previously assigned in bash.
Something like this ($id within quotes):
id=218; mysql -uuser -psecret -e 'use DevicesPool; SELECT * FROM Dirs_and_Names WHERE LNR="$id";'
The query gives no result though within mysql environment the record exists.
MariaDB [DevicesPool]> SELECT * FROM Dirs_and_Names WHERE LNR=218;
+-----+------+---------------+---------------------------------+
| LNR | D_Nr | Filename      | Path_and_File                   |
+-----+------+---------------+---------------------------------+
| 218 | 01   | King_Kong.mp4 | /home/user/Movies/King_Kong.mp4 |
+-----+------+---------------+---------------------------------+
1 row in set (0.001 sec)
When I execute this (no quotes for $id):
id=218; mysql -uuser -psecret -e 'use DevicesPool; SELECT * FROM Dirs_and_Names WHERE LNR=$id;'
then I receive an error:
ERROR 1054 (42S22) at line 1: Unknown column '$id' in 'where clause'
So, my question here:
How can I use external bash variables (strings?) as parameters within a mysql -e command? Specifically, how can I select or delete a record with an externally assigned id?
Thank you in advance.
-Linuxfluesterer
Single quotes are your issue
id=218; mysql -uuser -psecret -e "use DevicesPool; SELECT * FROM Dirs_and_Names WHERE LNR=$id;"
# or
id=218; mysql -uuser -psecret -e 'use DevicesPool; SELECT * FROM Dirs_and_Names WHERE LNR='$id';'
# or
id=218
query="SELECT * FROM Dirs_and_Names WHERE LNR=$id;"
mysql -uuser -psecret --database DevicesPool -e "$query"
Note the double quotes wrapping the SQL statement: they let bash substitute the variable before mysql sees it. Inside single quotes, $id is passed to mysql literally, which is exactly the "Unknown column '$id'" error you got. (export is only needed when a separate child process must read the variable from its environment; here the shell itself expands $id, so a plain assignment is enough.)
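The difference is easy to see without a database; echo stands in for mysql -e here:

```shell
id=218

# Single quotes: bash passes the text through untouched.
echo 'SELECT * FROM Dirs_and_Names WHERE LNR=$id;'
# prints: SELECT * FROM Dirs_and_Names WHERE LNR=$id;

# Double quotes: bash expands $id before the program sees it.
echo "SELECT * FROM Dirs_and_Names WHERE LNR=$id;"
# prints: SELECT * FROM Dirs_and_Names WHERE LNR=218;
```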

LOAD DATA LOCAL INFILE fails only on the first record's key (auto_increment) -- bug?

Here's the content of a very simple text file to show my problem. Somehow I ended up using "|" as the separator, but that's not the problem...
10|Antonio
11|Carolina
12|Diana
13|Alejandro
Here is the code I use to create that very simple table and to load the file into it.
CREATE TABLE IF NOT EXISTS names
(
id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100)
);
LOAD DATA LOCAL INFILE
'C:\\Users\\Carolina\\Desktop\\Tmp\\TablasCSV\\names.csv'
INTO TABLE names
CHARACTER SET 'utf8'
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\r\n';
Here is the result of a simple select:
mysql> SELECT * FROM names;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | Antonio   |
| 11 | Carolina  |
| 12 | Diana     |
| 13 | Alejandro |
+----+-----------+
4 rows in set (0.00 sec)
It has worked fine for me until I hit a case in which the first record's value for "id" was not "1".
I always get "1" after the load.
Has anyone noticed this problem?
Is this a bug?
So far I am fixing the record with an UPDATE command after the load, but I don't like it!!!!
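One possible cause worth checking (an assumption, not confirmed in this thread): if the file was saved with a UTF-8 byte-order mark, the invisible BOM bytes sit in front of the first field, "10" then fails to parse as an integer, the column falls back to 0, and AUTO_INCREMENT replaces that 0 with the next value, 1. The first bytes of the file are easy to inspect:

```shell
# Simulate a names.csv saved with a UTF-8 BOM (an assumption about the
# original file; some Windows editors add one silently).
f=$(mktemp)
printf '\357\273\27710|Antonio\r\n11|Carolina\r\n' > "$f"

# Dump the first three bytes; a BOM shows up as "ef bb bf".
head -c 3 "$f" | od -An -tx1
rm -f "$f"
```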

Can't get load_file() working in mysql

I've been struggling for 2 nights now to get load_file() to work, but the results are NULL.
I am running version "5.6.19-0ubuntu0.14.04.1".
Example:
mysql> show variables like '%secure%';
+------------------+-------------+
| Variable_name    | Value       |
+------------------+-------------+
| secure_auth      | ON          |
| secure_file_priv | /root/load/ |
+------------------+-------------+
mysql> show variables like 'max_allowed%';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| max_allowed_packet | 16777216 |
+--------------------+----------+
mysql> desc xmlDocs;
+-------------+-------------+------+-----+---------+-------+
| Field       | Type        | Null | Key | Default | Extra |
+-------------+-------------+------+-----+---------+-------+
| fileName    | varchar(30) | NO   | PRI | NULL    |       |
| server      | varchar(20) | NO   |     | NULL    |       |
| doc_content | blob        | NO   |     | NULL    |       |
+-------------+-------------+------+-----+---------+-------+
mysql> insert into xmlDocs values ('test','test',load_file('/root/load/test.xml'));
ERROR 1048 (23000): Column 'doc_content' cannot be null
File permissions:
drwxrwxr-x 5 mysql mysql 4096 Nov 24 08:18 .
drwx------ 6 root root 4096 Nov 24 08:33 ..
drwxr--r-- 5 root root 4096 Nov 22 16:24 EU1
drwxr--r-- 5 root root 4096 Nov 22 16:26 server
-rwxrwxrwx 1 mysql mysql 83440 Nov 24 08:18 test.xml
drwxr--r-- 5 root root 4096 Nov 22 16:24 US1
Checked:
MySQL has execute on the dir and even owns it
MySQL owns the file
DB user = root
File size < max_allowed_packet
secure-file-priv is set
I have no AppArmor running
Without setting secure-file-priv it can read unimportant data like /etc/passwd effortlessly :P. Also, I can import from "/", but nowhere else.
When setting secure-file-priv, I can only get it to work from "/"!
Same file, no secure-file-priv set:
mysql> insert into xmlDocs values ('test','test',load_file("/root/load/test.xml"));
ERROR 1048 (23000): Column 'doc_content' cannot be null
mysql> insert into xmlDocs values ('test','test',load_file("/etc/test.xml"));
Query OK, 1 row affected (0.00 sec)
Any ideas?
Typical ... I'd been looking for 2 days, and 5 minutes after posting it here I figured it out.
It's not just the folder the file is located in: EVERY folder in its path up to "/" must have the execute (search) bit set!
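That rule can be checked mechanically. A sketch using a throwaway path under /tmp as a stand-in for /root/load (which needs root to inspect): walk from the file's directory up to "/" and print each component's mode bits; the mysql user needs an "x" on every line.

```shell
# Build a throwaway path; substitute the real directory of the file
# you want load_file() to read.
base=$(mktemp -d)
mkdir -p "$base/load"
touch "$base/load/test.xml"

# Walk each directory from the file up to "/", printing its mode bits.
d=$(dirname "$base/load/test.xml")
while [ "$d" != "/" ]; do
    stat -c '%A %n' "$d"
    d=$(dirname "$d")
done
stat -c '%A %n' /
rm -rf "$base"
```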

dbf2mysql not inserting records

I am using the dbf2mysql library http://manpages.ubuntu.com/manpages/natty/man1/dbf2mysql.1.html to port some data to MySQL, but when I try to view the inserted records, nothing has been inserted.
Here is the command I am running:
$ dbf2mysql -vvv -q -h localhost -P password -U root smb/C_clist.DBF -d opera_dbf -t pricelists -c
Opening dbf-file smb/C_clist.DBF
dbf-file: smb/C_clist.DBF - Visual FoxPro w. DBC, MySQL-dbase: opera_dbf, MySQL-table: pricelists
Number of records: 12
Name Length Display Type
-------------------------------------
CL_CODE 8 0 C
CL_DESC 30 0 C
CL_CURR 3 0 C
CL_FCDEC 1 0 N
Making connection to MySQL-server
Dropping original table (if one exists)
Building CREATE-clause
Sending create-clause
CREATE TABLE pricelists (CL_CODE varchar(8) not null,
CL_DESC varchar(30) not null,
CL_CURR varchar(3) not null,
CL_FCDEC int not null)
fields in dbh 4, allocated mem for query 279, query size 139
Inserting records
Inserting record 0
LOAD DATA LOCAL INFILE '/tmp/d2mygo04TM' REPLACE INTO table pricelists fields terminated by ',' enclosed by ''''
Closing up....
Then in MySQL, the table is created with the correct field types, but no data:
mysql> use opera_dbf;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> describe pricelists;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| CL_CODE  | varchar(8)  | NO   |     | NULL    |       |
| CL_DESC  | varchar(30) | NO   |     | NULL    |       |
| CL_CURR  | varchar(3)  | NO   |     | NULL    |       |
| CL_FCDEC | int(11)     | NO   |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
4 rows in set (0.13 sec)
mysql> select * from pricelists;
Empty set (0.00 sec)
mysql>
What am I missing?
I removed the -q option and it works
-q  dbf2mysql "Quick" mode. Inserts data via a temporary file using the
    'LOAD DATA INFILE' MySQL statement. This increased insertion speed on
    my PC 2-2.5 times. Also note that during the whole 'LOAD DATA' the
    affected table is locked.

phpmyAdmin gives error while database and tables look good in mySQL CLI

I changed the name of a table from within phpMyAdmin, and it immediately broke. After that, when I try to connect using phpMyAdmin (/phpMyAdmin/index.php), I get this error in the log:
[Wed Aug 08 14:18:58 2012] [error] Query call failed: Table 'mydb.mychangedtbl' doesn't exist (1146)
mychangedtbl is the table whose name was changed. This issue is only in phpMyAdmin; I am able to access the database and tables fine from the CLI. I restarted MySQL, but that did not fix it. Something seems stuck for phpMyAdmin. I also restarted the browser, but that didn't help either.
When I rename this particular table back to what it was using the command line, phpMyAdmin works fine again. Here is the structure of the table:
mysql> DESCRIBE mychangedtbl;
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| userid    | char(6)     | NO   | PRI | NULL    |       |
| userpass  | varchar(40) | NO   |     | NULL    |       |
| userlevel | char(3)     | NO   |     | o       |       |
| userpcip  | varchar(45) | NO   |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
mysql>
Column userpass has Collation = ascii_bin, which does not show in the output above; the other columns are ascii_general_ci.
Please advise.
Thank you.
Rajeev
This was because Apache was using the same table for MySQL authentication. I changed the Apache config and restarted; that let me change the table name. All good again.