I've tried several ways to convert the SQL file found here into an SQLite file. The ways include this script (which produces the table properly in SQLite, but leaves it empty), this script, and outputting the data from MySQL as CSV, which I then tried to import into SQLite, only to end up with what appears to be the wrong number of columns...
Is there no easy way to convert from MySQL to SQLite?
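For what it's worth, the CSV route can be made to work with a short Python script; here is a minimal sketch (file, database, and table names are placeholders, and it assumes the first CSV line holds valid column names). Letting the csv module do the parsing avoids the wrong-column-count problem that naive comma-splitting causes on quoted fields:

    import csv
    import sqlite3

    conn = sqlite3.connect("output.db")
    with open("export.csv", newline="") as f:
        reader = csv.reader(f)            # handles quoted fields with embedded commas
        header = next(reader)             # first line is assumed to name the columns
        cols = ", ".join(header)
        placeholders = ", ".join("?" * len(header))
        conn.execute("CREATE TABLE IF NOT EXISTS mytable (%s)" % cols)
        conn.executemany("INSERT INTO mytable VALUES (%s)" % placeholders, reader)
    conn.commit()
    conn.close()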
I can't for the life of me make MySQL Workbench import NULL for blank cells in a CSV.
I've tried:
blank
NULL
\N
Each with and without ""
How does one signify that a cell is 'null' inside a CSV file I want to import into MySQL via Workbench?
Since MySQL Workbench version 8.0.16 (released on 04/25/2019) there has been an additional option for uploading .csv files -- "null and NULL word as SQL keyword". When this option is set to YES, an unquoted NULL in the .csv file (,NULL, rather than ,"NULL",) is imported as SQL NULL.
You can solve your problem by updating your Workbench to the newest version.
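For illustration, with that option set to YES, a file like the following imports the second field of row 1 as SQL NULL, while the quoted form in row 2 comes in as the literal string "NULL":

    id,middle_name,age
    1,NULL,34
    2,"NULL",27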
If you were trying to import a CSV into the model for creating insert scripts (where you won't get the same options described in the other answers), try the following:
\func NULL
You can also use this syntax to call other functions; for example, the following inserts the current date and time when you forward engineer the model to the database:
\func CURRENT_TIMESTAMP
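When the model is forward engineered, cells filled with \func come out as expressions rather than quoted strings; the generated statement would look something like this (table and column names here are illustrative):

    INSERT INTO orders (note, created_at) VALUES (NULL, CURRENT_TIMESTAMP);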
I suggest switching to Navicat for MySQL -- if only for the CSV table import wizard. After repeated attempts to get the MySQL Workbench import wizard to work for my CSV file, I gave up. I recalled using Navicat in an earlier project, and it worked flawlessly. For this import, I was able to load my 15000+ row CSV with multiple null datetime values into my MySQL table without a problem.
It's odd that MySQL Workbench still hasn't solved this annoyance, but Navicat provides an easy alternative until they do.
I have a problem: I'm totally new to SQL and am having to learn it during an internship. I had to import huge txt files into a database in phpMyAdmin (it took forever, but I managed it with LOAD DATA INFILE). Now my task is to find a way to check whether the data in the tables is the same as the data in the given txt files.
Is there any way to do this?
Have you tried exporting the data through phpMyAdmin using a different file format instead of .sql? phpMyAdmin gives you several choices, including CSV and OpenOffice spreadsheets. That would make your comparison easier: you could open the export in Excel, sort the data, and compare much more quickly.
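For exports small enough to fit in memory, the same sort-and-compare idea takes only a few lines of Python (file names are placeholders; this assumes both files use the same delimiter and column order):

    # Order-insensitive comparison of the two exports.
    with open("phpmyadmin_export.csv") as a, open("original.txt") as b:
        left = sorted(line.rstrip("\n") for line in a)
        right = sorted(line.rstrip("\n") for line in b)

    print("files match" if left == right else "files differ")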
The best way to do this is to load, and then extract.
Compare your extract with the original file.
Another way is to count the number of lines in both the table and the file, then extract a few lines and verify that they exist in both. This is less precise.
But this has nothing to do with SQL; it is just test logic.
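A minimal sketch of the counting check, assuming a hypothetical table named imported_data and using pymysql (any MySQL client library would do):

    import pymysql

    conn = pymysql.connect(host="localhost", user="user",
                           password="secret", database="mydb")
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM imported_data")
        (table_rows,) = cur.fetchone()
    conn.close()

    # Count data lines in the original file (subtract 1 if it has a header row).
    with open("source.txt") as f:
        file_lines = sum(1 for _ in f)

    print("table:", table_rows, "file:", file_lines,
          "OK" if table_rows == file_lines else "MISMATCH")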
I have an MSSQL database and am trying to migrate it to a MySQL database. The problem is that when I use MySQL Workbench, some table records in my MSSQL database are not migrated (there is an error and MySQL Workbench stops responding). Are there any tools to export MSSQL table records into an SQL file that can be executed in MySQL?
Both T-SQL and MySQL support the VALUES clause. So, right-click on the database and select Generate Scripts from Tasks:
Then you can choose the objects to script, and make sure you have selected to script the data, too.
You can even get the schema and change it a little bit to match the MySQL syntax.
For small amounts of data this works well. If you are exporting large tables, it is better to use another tool: for example, with bcp you can export your data in CSV format and then import it into the MySQL database.
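A rough sketch of that route (server, credentials, and table names are placeholders; bcp's character mode does not quote fields, so choose a field terminator that cannot occur in the data):

    bcp MyDb.dbo.MyTable out mytable.csv -S sqlserver_host -T -c -t,

Then on the MySQL side, adjusting the terminators to match what bcp actually produced:

    LOAD DATA LOCAL INFILE 'mytable.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';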
I have a simple, yet for me difficult, problem. I am encoding some integer data into a binary format in a Python script, and now I would like to load this binary data into a MySQL table. I'm using pymysql to connect to the MySQL database.
I've tried dropping the encoding in Python and leaving it to MySQL to load the data into a BLOB. There MySQL uses UTF-8 encoding, which I'm not interested in. Maybe I can change the way MySQL encodes the BLOBs?
Or better yet, can I just use the encoded data that I already have and load it into the MySQL table directly?
Thanks!
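For reference, the direct route works with a parameterized query; here is a minimal pymysql sketch (table and column names are made up) where the column is declared as BLOB, so no character set is applied and the bytes are stored verbatim:

    import struct
    import pymysql

    # Pack some integers into raw bytes (little-endian, 4 bytes each).
    payload = struct.pack("<3i", 10, 20, 30)

    # Assumes: CREATE TABLE encoded (id INT PRIMARY KEY, data BLOB);
    conn = pymysql.connect(host="localhost", user="user",
                           password="secret", database="mydb")
    with conn.cursor() as cur:
        cur.execute("INSERT INTO encoded (id, data) VALUES (%s, %s)",
                    (1, payload))
    conn.commit()
    conn.close()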
I received over 100GB of data with 67 million records from one of the retailers. My objective is to do some market-basket analysis and CLV (customer lifetime value) work. This data is a direct SQL dump from one of the tables, with 70 columns. I'm trying to find a way to extract information from this data, as managing it on a small laptop/desktop setup is becoming time-consuming. I considered the following options:
Parse the data and convert it to CSV format. The file size might come down to around 35-40GB, as more than half of the information in each record is column names. However, I may still have to use a database, as I can't use R or Excel with 67 million records.
Migrate the data to a MySQL database. Unfortunately I don't have the schema for the table, so I'm trying to recreate it by looking at the data. I may have to replace to_date() in the data dump with str_to_date() to match MySQL's format.
Is there any better way to handle this? All I need to do is extract the data from the SQL dump by running some queries. Hadoop etc. are options, but I don't have the infrastructure to set up a cluster. I'm considering MySQL, as I have storage space and some memory to spare.
Supposing I go down the MySQL path, how would I import the data? I'm considering one of the following:
Use sed to replace to_date() with the appropriate str_to_date() inline (see the sed sketch after this question). Note that I need to do this for a 100GB file. Then import the data using the mysql CLI.
Write a Python/Perl script that reads the file, converts the data, and writes to MySQL directly.
What would be faster? Thank you for your help.
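For reference, option 1 could look something like the line below; the format masks are pure assumptions (check what the dump actually contains), taking to_date('2015-01-31', 'YYYY-MM-DD') as the Oracle-style call and '%Y-%m-%d' as its MySQL equivalent:

    sed "s/to_date(\('[^']*'\), *'YYYY-MM-DD')/str_to_date(\1, '%Y-%m-%d')/g" \
        dump.sql > dump_mysql.sql

sed streams the file, so memory is not the issue, but it still rewrites all 100GB once before the import even begins.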
In my opinion, writing a script will be faster, because you will skip the sed pass.
I think you should set up the server on a separate PC and run the script from your laptop.
Also, use tail to quickly grab a chunk from the bottom of this large file, so you can test your script on that part before running it on the whole 100GB file.
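For example (file names are placeholders):

    # Take the last 100000 lines as a quick test sample; note it will not
    # include the CREATE TABLE statements from the top of the dump.
    tail -n 100000 dump.sql > sample.sql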
I decided to go down the MySQL path. I created the schema by looking at the data (I had to increase a few of the column sizes, as there were unexpected variations in the data) and wrote a Python script using the MySQLdb module. The import completed in 4 hours 40 minutes on my 2011 MacBook Pro, with 8154 failures out of 67 million records; those failures were mostly data issues. Both client and server were running on my MBP.
@kpopovbg, yes, writing a script was faster. Thank you.
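For anyone taking the same route, here is a skeleton of that kind of loader (all names are made up, and the table is cut to three columns for brevity; the real one had 70). The per-row try/except is what lets bad records be counted and skipped instead of aborting the whole import:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="retail")
    cur = conn.cursor()

    failures = 0
    with open("converted_dump.csv") as f:
        for line in f:
            row = line.rstrip("\n").split(",")  # naive split; real data may need csv
            try:
                cur.execute("INSERT INTO transactions VALUES (%s, %s, %s)", row)
            except MySQLdb.Error:
                failures += 1                   # mostly bad rows, as described above

    conn.commit()
    conn.close()
    print("failed rows:", failures)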