I am using the dbf2mysql utility (http://manpages.ubuntu.com/manpages/natty/man1/dbf2mysql.1.html) to port some data to MySQL, but when I try to view the inserted records, nothing has been inserted.
Here is the command I am running:
$ dbf2mysql -vvv -q -h localhost -P password -U root smb/C_clist.DBF -d opera_dbf -t pricelists -c
Opening dbf-file smb/C_clist.DBF
dbf-file: smb/C_clist.DBF - Visual FoxPro w. DBC, MySQL-dbase: opera_dbf, MySQL-table: pricelists
Number of records: 12
Name Length Display Type
-------------------------------------
CL_CODE 8 0 C
CL_DESC 30 0 C
CL_CURR 3 0 C
CL_FCDEC 1 0 N
Making connection to MySQL-server
Dropping original table (if one exists)
Building CREATE-clause
Sending create-clause
CREATE TABLE pricelists (CL_CODE varchar(8) not null,
CL_DESC varchar(30) not null,
CL_CURR varchar(3) not null,
CL_FCDEC int not null)
fields in dbh 4, allocated mem for query 279, query size 139
Inserting records
Inserting record 0
LOAD DATA LOCAL INFILE '/tmp/d2mygo04TM' REPLACE INTO table pricelists fields terminated by ',' enclosed by ''''
Closing up....
Then, in MySQL, the table is created with the correct field types, but contains no data:
mysql> use opera_dbf;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> describe pricelists;
+----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| CL_CODE | varchar(8) | NO | | NULL | |
| CL_DESC | varchar(30) | NO | | NULL | |
| CL_CURR | varchar(3) | NO | | NULL | |
| CL_FCDEC | int(11) | NO | | NULL | |
+----------+-------------+------+-----+---------+-------+
4 rows in set (0.13 sec)
mysql> select * from pricelists;
Empty set (0.00 sec)
mysql>
What am I missing?
I removed the -q option and it works. The man page describes -q as follows:
-q  "Quick" mode. Inserts data via a temporary file using the
    'LOAD DATA INFILE' MySQL statement. This increased insertion
    speed on my PC 2-2.5 times. Also note that during the whole
    'LOAD DATA' the affected table is locked.
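A plausible explanation, given the transcript above: in quick mode dbf2mysql writes a temporary file and loads it with LOAD DATA LOCAL INFILE, so nothing gets inserted if the server rejects LOCAL loading. Whether that is the cause here is an assumption, but it is easy to check from the MySQL prompt:

```sql
-- see whether the server accepts LOAD DATA LOCAL INFILE at all
SHOW GLOBAL VARIABLES LIKE 'local_infile';
-- if it reports OFF, enable it (requires the SUPER privilege)
SET GLOBAL local_infile = 1;
```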
Related
I have 120 tables in my project.
Now I have to migrate from MSSQL to MySQL.
I have already written the queries to create those tables, and they work.
My problem is that when I execute this script in MSSQL it completes within a second, but MySQL takes around 4 minutes to execute it.
I want to improve the performance in MySQL, but I don't know how to do that; if anyone knows, please help me.
Thank you
Here is my sample table script
MySQL
CREATE TABLE `rb_tbl_bak` (
`BakPathId` int NOT NULL AUTO_INCREMENT,
`BakPath` varchar(500) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`BakDate` datetime(3) DEFAULT NULL,
PRIMARY KEY (`BakPathId`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
MSSQL
--Create table and its columns
CREATE TABLE [dbo].[RB_Tbl_Bak] (
[BakPathId] [int] NOT NULL IDENTITY (1, 1),
[BakPath] [nvarchar](500) NULL,
[BakDate] [datetime] NULL);
GO
I have to do the same for 120+ tables.
Oh well; in this case, MySQL simply takes more time. You can turn on profiling to get an idea of what takes so long. Here is an example using MySQL's CLI:
SET profiling = 1;
CREATE TABLE rb_tbl_back (id BIGINT UNSIGNED NOT NULL PRIMARY KEY);
SHOW PROFILES;
You should get a response like this:
mysql> SHOW PROFILES;
+----------+------------+--------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                              |
+----------+------------+--------------------------------------------------------------------+
|        1 | 0.00913800 | CREATE TABLE rb_tbl_back (id BIGINT UNSIGNED NOT NULL PRIMARY KEY) |
+----------+------------+--------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> SHOW PROFILE FOR QUERY 1;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000071 |
| checking permissions | 0.000007 |
| Opening tables | 0.001698 |
| System lock | 0.000043 |
| creating table | 0.007260 |
| After create | 0.000004 |
| query end | 0.000004 |
| closing tables | 0.000015 |
| freeing items | 0.000031 |
| logging slow query | 0.000002 |
| cleaning up | 0.000003 |
+----------------------+----------+
11 rows in set (0.00 sec)
If you read the profiling documentation, there are other flags for showing the profile of the query (CPU, BLOCK IO, etc.) that might help you dig into the 'creating table' stage.
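For instance, the per-stage breakdowns can be requested like this ("QUERY 1" refers to the Query_ID shown by SHOW PROFILES):

```sql
SHOW PROFILE CPU FOR QUERY 1;       -- CPU time per stage
SHOW PROFILE BLOCK IO FOR QUERY 1;  -- block I/O counts per stage
SHOW PROFILE ALL FOR QUERY 1;       -- every available column
```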
I got this answer from here
When I try to import a CSV into a MySQL table, I get an error:
Data too long for column 'incident' at row 1
I'm sure the values are no longer than varchar(12), but I still get the error.
MariaDB [pagerduty]>
LOAD DATA INFILE '/var/lib/mysql/pagerduty/script_output.csv'
REPLACE INTO TABLE incidents
ignore 1 lines;
ERROR 1406 (22001): Data too long for column 'incident' at row 1
MariaDB [pagerduty]>
LOAD DATA INFILE '/var/lib/mysql/pagerduty/script_output.csv'
INTO TABLE incidents
ignore 1 lines;
ERROR 1406 (22001): Data too long for column 'incident' at row 1
When trying with REPLACE, only one column (the one set as the primary key) is loaded.
MariaDB [pagerduty]>
LOAD DATA INFILE '/var/lib/mysql/pagerduty/script_output.csv'
IGNORE INTO TABLE incidents
ignore 1 lines;
Query OK, 246 rows affected, 1968 warnings (0.015 sec)
Records: 246 Deleted: 0 Skipped: 0 Warnings: 1968
Columns:
+----------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------+--------------+------+-----+---------+-------+
| incident | varchar(12) | NO | PRI | NULL | |
| description | varchar(300) | YES | | NULL | |
| status | varchar(12) | YES | | NULL | |
| urgency | varchar(7) | YES | | NULL | |
| service | varchar(27) | YES | | NULL | |
| trigger | varchar(25) | YES | | NULL | |
| team | varchar(20) | YES | | NULL | |
| incident_start | datetime(6) | YES | | NULL | |
| incident_end | datetime(6) | YES | | NULL | |
| resolved_by | varchar(20) | YES | | NULL | |
+----------------+--------------+------+-----+---------+-------+
10 rows in set (0.003 sec)
By default, MySQL looks for a TAB character to separate values. Your file is using a comma, so MySQL reads the entire line and assumes it is the value for the first column only.
You need to tell MySQL that the column terminator is a comma, and while you're at it, tell it about the enclosing double quotes.
Try this:
LOAD DATA INFILE '/var/lib/mysql/pagerduty/script_output.csv' REPLACE INTO TABLE incidents
columns terminated by ','
optionally enclosed by '"'
ignore 1 lines;
Reference
If you THINK your data appears OK and it is still complaining that the data is too long, try creating a temporary table with the same structure but with the first column, incident, as a varchar(100) just for grins... maybe widen a few of the others too if they might be causing a problem.
Import the data into THAT table and see whether you get the same error.
If there is no error, check the maximum trimmed length of the data in the respective columns and analyze the data itself: bad format, longer than expected, etc.
Once you have figured it out, you can pull the data into production. You could also always PRE-LOAD the data into this wider staging table, truncating it before each load so a fresh load creates no duplicates via the primary key.
I have had to do that in the past; it was also efficient for pre-qualifying lookup-table IDs for new incoming data. Additionally, you can apply data cleansing in the temp table before pulling into production.
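Checking the maximum length per column can also be done outside MySQL before re-importing. A minimal sketch with awk; the file name and sample contents here are made up, so point it at your real CSV (and note that this naive split does not honor quoted fields containing commas):

```shell
# hypothetical stand-in for the real CSV, created only for demonstration
printf 'incident,status\nPD-123456,resolved\nPD-1234567890123,open\n' > /tmp/sample.csv
# report the longest value seen in each comma-separated column, skipping the header
awk -F',' 'NR > 1 { if (NF > nf) nf = NF
                    for (i = 1; i <= NF; i++) if (length($i) > max[i]) max[i] = length($i) }
           END   { for (i = 1; i <= nf; i++) print "column " i ": max length " max[i] }' /tmp/sample.csv
# prints: column 1: max length 16
#         column 2: max length 8
```

Any column whose maximum exceeds its declared varchar width is the one to fix.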
GOAL: to output all SQL queries and their outputs into a text file
SQL CODE:
\W /*enable warnings*/
USE bookdb; /*doesn't exist because I WILL DROP DATABASE booksdb BEFORE RUNNING THIS SCRIPT (to avoid duplicate entry errors from testing out the queries during the assignment)*/
/* Query 0 */
SELECT user(), current_date(), version(), @@sql_mode\G
/*Query 1*/
CREATE DATABASE IF NOT EXISTS bookdb;
Use bookdb;
/*QUERY 2*/
CREATE TABLE books (
isbn CHAR(10),
author VARCHAR(100) NOT NULL,
title VARCHAR(128) NOT NULL,
price DECIMAL(7 , 2 ) NOT NULL,
subject VARCHAR(30) NOT NULL,
PRIMARY KEY (isbn)
)ENGINE = INNODB;
/*QUERY 3*/
INSERT INTO books
VALUES ('0345377648', 'Anne Rice', 'Lasher', 14.00, 'FICTION');
INSERT INTO books
VALUES ('1557044287','Ridley Scott','Gladiator',26.36,'FICTION');
INSERT INTO books
VALUES ('0684856093', 'Sean Covey', 'The 7 Habits', 12, 'CHILDREN');
/*QUERY 4*/
SHOW TABLES;
/*QUERY 5*/
DESC books;
/*QUERY 6*/
SELECT * FROM books;
/*QUERY 7*/
SELECT ISBN, title, price FROM books;
COMMANDS FROM SQL PROMPT:
mysql> tee /my_scripts/yourname_assignment1.txt
mysql> source /my_scripts/yourname_assignment1.sql
mysql> notee
RESULTING assignment1.txt FILE:
user(): root@localhost
current_date(): 2016-09-25
version(): 5.7.15-0ubuntu0.16.04.1
@@sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
1 row in set (0.00 sec)
Query OK, 1 row affected (0.00 sec)
Database changed
Query OK, 0 rows affected (0.32 sec)
Query OK, 1 row affected (0.06 sec)
Query OK, 1 row affected (0.04 sec)
Query OK, 1 row affected (0.08 sec)
+------------------+
| Tables_in_bookdb |
+------------------+
| books |
+------------------+
1 row in set (0.00 sec)
+---------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| isbn | char(10) | NO | PRI | NULL | |
| author | varchar(100) | NO | | NULL | |
| title | varchar(128) | NO | | NULL | |
| price | decimal(7,2) | NO | | NULL | |
| subject | varchar(30) | NO | | NULL | |
+---------+--------------+------+-----+---------+-------+
5 rows in set (0.00 sec)
+------------+--------------+--------------+-------+----------+
| isbn | author | title | price | subject |
+------------+--------------+--------------+-------+----------+
| 0345377648 | Anne Rice | Lasher | 14.00 | FICTION |
| 0684856093 | Sean Covey | The 7 Habits | 12.00 | CHILDREN |
| 1557044287 | Ridley Scott | Gladiator | 26.36 | FICTION |
+------------+--------------+--------------+-------+----------+
3 rows in set (0.00 sec)
+------------+--------------+-------+
| ISBN | title | price |
+------------+--------------+-------+
| 0345377648 | Lasher | 14.00 |
| 0684856093 | The 7 Habits | 12.00 |
| 1557044287 | Gladiator | 26.36 |
+------------+--------------+-------+
3 rows in set (0.00 sec)
mysql> notee
As you can see, my queries are executed without errors (i.e., the results come out correctly), but the output does not list the queries themselves.
My professor's "example.txt" file includes the queries that are listed in the assignment1.sql file. In other words, his output apparently includes the queries from the SQL file, where mine does not. How do you change the "tee" command to include the queries? Did my professor simply edit a copy of the file by hand, or am I missing something?
I realize that if I enter the commands by hand, my output will look more like his (i.e., with the queries), but that's not the way he explained this assignment.
I'm running the latest x64 Ubuntu OS, in case that has an effect. I'm new to this; I've searched thoroughly online for this specific issue, but no one has it listed.
Thank you.
ASSIGNMENT DIRECTIONS IF THIS CLARIFIES ANYTHING:
Run the following command:
mysql -u root -p --force --comments -vvv
Use the tee command to put your output in a text file to submit.
mysql> tee c:/my_scripts/yourname_assignment1.txt
Run the SQL script hibrahim_assignment1.sql
mysql> source c:\my_scripts\yourname_assignment1.sql
Type in notee to stop the tee command.
mysql> notee
I fixed it. It was because my Linux machine was not granting permissions correctly. The command
mysql -u root -p --force --comments -vvv
required elevation (and didn't require specifying root). I had apparently been missing --force --comments -vvv, hence the queries and comments not being included. So for future reference: you'll need elevation, and I didn't need to specify root. The resulting command is
sudo mysql --force --comments -vvv
I changed the name of a table from within phpMyAdmin, and it immediately broke. After that, when I try to connect using phpMyAdmin (/phpMyAdmin/index.php), I get this error in the log:
[Wed Aug 08 14:18:58 2012] [error] Query call failed: Table 'mydb.mychangedtbl' doesn't exist (1146)
mychangedtbl is the table whose name was changed. The issue occurs only in phpMyAdmin; I am able to access the database and tables fine from the CLI. I restarted MySQL, but that did not fix it; something seems to be stuck for phpMyAdmin. I restarted the browser as well, but that didn't help either.
When I rename this particular table back to what it was, using the command line, phpMyAdmin works fine again. Here is the structure of the table:
mysql> DESCRIBE mychangedtbl;
+-----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| userid | char(6) | NO | PRI | NULL | |
| userpass | varchar(40) | NO | | NULL | |
| userlevel | char(3) | NO | | o | |
| userpcip | varchar(45) | NO | | NULL | |
+-----------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
mysql>
Column userpass has collation ascii_bin, which does not show in the output above; the other columns are ascii_general_ci.
Please advise.
Thank you,
Rajeev
This turned out to be because Apache was using the same table for MySQL authentication. I changed the Apache config and restarted; that let me change the table name. All good again.
I am running MySQL on my Ubuntu machine. I checked the /etc/mysql/my.cnf file; it shows my database's temporary directory:
...
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
...
As it shows, my MySQL server's temporary directory is /tmp.
I have a students.dat file, the content of this file is like following:
...
30 kate name
31 John name
32 Bill name
33 Job name
...
I copied the students.dat file above to the /tmp directory. Then I ran the following command to load the data from students.dat into the students table in my database:
LOAD DATA INFILE '/tmp/students.dat'
INTO TABLE school_db.students
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(student_id, name, attribute)
But I got this error message in the MySQL console:
ERROR 29 (HY000): File '/tmp/students.dat' not found (Errcode: 13)
Why can MySQL not find the students.dat file, even though the file is under the MySQL temporary directory?
P.S.
The students table looks like the following (there are already 4 records in the table before running the LOAD DATA INFILE ... query):
mysql> describe students;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| student_id | int(11) | YES | | NULL | |
| name | varchar(255) | YES | MUL | NULL | |
| attribute | varchar(12) | YES | MUL | NULL | |
| teacher_id | int(11) | YES | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
Have a look at the sixth post in the "file not found error" thread. It seems that if you specify LOAD DATA LOCAL INFILE, it should work (they added the LOCAL keyword).
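Applied to the statement from the question, that would look like the sketch below. With LOCAL, the client reads the file and ships its contents to the server, so server-side file permissions stop mattering; it does assume local_infile is enabled on both client and server:

```sql
LOAD DATA LOCAL INFILE '/tmp/students.dat'
INTO TABLE school_db.students
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(student_id, name, attribute);
```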
ERROR 29 (HY000): File '/tmp/file_name' not found (Errcode: 13)
This error occurs mainly when trying to load a data file from some location into a table in a MySQL database.
Just change the owner of the file.
1) Check the permissions of the file with this command:
ls -lhrt <filename>
2) Then change the ownership:
chown mysql:mysql <filename>
3) Now try the LOAD DATA INFILE command again. It should work.
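Errcode 13 is EACCES, i.e., a filesystem permission error: the mysqld process cannot open the file. A quick sketch of verifying this, using /tmp/students.dat as a stand-in path; note that on Ubuntu, AppArmor can also deny mysqld access to directories outside its profile even when the permission bits look fine:

```shell
# create a stand-in data file and make it world-readable,
# so the mysqld process can open it regardless of who owns it
printf '30\tkate\tname\n' > /tmp/students.dat
chmod 644 /tmp/students.dat
# print the octal permission bits and the owner to confirm
stat -c '%a %U' /tmp/students.dat
```

If the bits look right and LOAD DATA INFILE still fails, the LOCAL variant mentioned in the other answer sidesteps server-side file access entirely.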