Two databases in the same project - MySQL

What approach should I follow to use two different types of databases in the same project, e.g. MySQL for transaction-related queries and MonetDB for analysis purposes?

You could keep your transactions in MySQL and periodically (e.g. every hour) move data over to MonetDB, e.g. using CSV export. For example, given a table sometable you could do the following in MySQL:
SELECT * FROM sometable
INTO OUTFILE '/tmp/export.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
And then in MonetDB:
DELETE FROM sometable;
COPY INTO sometable FROM '/tmp/export.csv' USING DELIMITERS ',','\n','"';
More elaborate setups could also just export the data added during the last day, and then just append on the MonetDB side.
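For instance, a minimal sketch of such an incremental setup, assuming sometable has a created_at timestamp column (hypothetical) recording when each row was inserted, could filter the export on the MySQL side:
SELECT * FROM sometable
WHERE created_at >= NOW() - INTERVAL 1 DAY
INTO OUTFILE '/tmp/export_last_day.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
On the MonetDB side you would then skip the DELETE and simply append the new rows:
COPY INTO sometable FROM '/tmp/export_last_day.csv' USING DELIMITERS ',','\n','"';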

Related

Exporting data to csv from mysql using SELECT INTO OUTFILE exports all columns in a single column

I am trying to export MySQL view data to CSV. I have a large set of data in my database tables, more than 15 million rows. I joined all of the tables into a view and I just want to export it using the following query.
SELECT *
FROM database_name
INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/filename_1.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
My problem is that the generated CSV file is not laid out correctly: all the columns of the view are exported into a single column in the CSV file. How can I query it so that the data ends up in separate columns? Please help me with this. Thanks in advance.

How to output MySQL data tables in CSV format?

I need to know how I can export 10 data tables from one database into CSV format with a daily cron job.
I know this script:
SELECT *
FROM table_name
INTO OUTFILE '/var/lib/mysql-files/BACKUP.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
But how can I add the other 9 tables to the same script?
Best Regards!
You should look into mysqldump with the --tab option. It runs those INTO OUTFILE statements for you, dumping each table into a separate file.
You don't want all the tables in one file, because it would make it very awkward to import later.
Always be thinking about how you will restore a backup. I tell people, "you don't need a backup strategy, you need a restore strategy." Backing up is just a necessary step to restoring.
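For example, a minimal sketch of a daily dump, assuming a hypothetical database named mydb and that the MySQL server account can write to /var/lib/mysql-files, could be called from cron like this:
mysqldump --tab=/var/lib/mysql-files --fields-terminated-by=',' --fields-enclosed-by='"' --lines-terminated-by='\n' mydb
This writes, for each table in mydb, a tablename.sql file with the CREATE TABLE statement and a tablename.txt data file produced via SELECT ... INTO OUTFILE, so each of your 10 tables ends up in its own delimited file that can be restored independently.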

Showing irrelevant data from other tables in MySQL Workbench

I am trying to import a table that has three columns.
After my import, when I do SELECT * FROM tablename,
I see irrelevant data from other tables or databases,
even though I am accessing the table through the dbname.tablename format.
Has anyone experienced this situation?
Make sure that the column names of the table in the database and the ones in the CSV file heading are exactly the same. The MySQL Workbench table data import wizard does not allow you to ignore initial rows.
If you are still having issues, I would suggest executing the following query, which runs much faster than the table data import wizard:
load data local infile 'filelocation/filename.csv' into table tablename
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(col1, col2, col3);

MySQL append/insert from a different server

I have a table on the development box with exactly the same format as another one on the production server. The data on the development box needs to be appended/inserted into the production table, where I don't have permission to create a table.
I was thinking about doing something like INSERT INTO production_table SELECT * FROM develop_table; however, since I cannot create develop_table on the production server, this is impossible to do.
I am using Sequel Pro, and I don't know whether there is a way to export my development table to a file (CSV/SQL) so that I can run some command on my client side to load that file into production without overwriting the production table.
Assuming your production table has primary/unique key(s), you can export the data from your development server as a .csv file and load it into your production server with LOAD DATA, specifying whether you want to replace or ignore the duplicated rows.
Example:
On your development server, export the data to a .csv file. You can use SELECT ... INTO OUTFILE to do that:
select *
into outfile '/home/user_dev/your_table.csv'
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n'
from your_table;
On your production server, copy the your_table.csv file over and load it using LOAD DATA ...:
load data infile '/home/user_prod/your_table.csv'
replace -- This will replace any rows with duplicated primary | unique key values.
-- If you don't want to replace the rows, use "ignore" instead of "replace"
into table your_table
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n';
Read the MySQL reference manual entries for SELECT ... INTO OUTFILE and LOAD DATA for additional information.

bulk insert into mysql

I have a gridview on an ASP.NET page with a checkbox column. I need to insert the checked rows of the gridview into a MySQL table.
One of the easiest ways would be to find the selected rows and insert them one by one in a loop.
However, this is time-consuming, considering there may be 10000 rows at any given time, and with such a long-running process there is a risk of losing the connection in the course of the insertion.
Is there any way to expedite the insertion of a huge number of rows?
Thanks,
Balaji G
You can first get all the checked records, write them out in tab-delimited or comma-delimited format, and then do the bulk insertion using LOAD DATA INFILE.
Here's the sample format:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
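If the delimited file is written on the web/application server rather than on the MySQL host, the LOCAL variant of the same statement can be used (provided local_infile is enabled on both client and server); a minimal sketch, assuming hypothetical column names matching the gridview:
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(item_id, item_name, is_checked);
This way the whole batch travels to the server in a single statement instead of 10000 individual INSERTs.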