MySQL: create a different partition for each ID inside a trigger

I read about partitioning in MySQL and would like to know how to use it in my situation:
I have a project where I store huge CSV files inside a database.
For this I have one table 'csvfiles' and another table 'csvdata'.
Queries are only made on one CSV file at a time, so my idea is to create a new partition in csvdata for each distinct primary-key value in csvfiles every time a CSV file is added.
Ideally this should happen automatically; doing it inside a trigger would be great.
Could anybody tell me whether this is a valid setup, and how it could be realized?
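For reference, a minimal sketch of the layout being described, with assumed table and column names. One caveat worth knowing up front: MySQL does not allow statements that cause an implicit commit (such as ALTER TABLE) inside a trigger, so the ADD PARTITION step would have to be issued from application code rather than from a trigger:

-- csvdata partitioned by the owning file's ID (all names are assumptions)
CREATE TABLE csvdata (
    file_id INT NOT NULL,    -- references csvfiles' primary key
    line_no INT NOT NULL,
    payload TEXT,
    PRIMARY KEY (file_id, line_no)
)
PARTITION BY LIST (file_id) (
    PARTITION p1 VALUES IN (1)
);

-- run from application code right after inserting into csvfiles;
-- ALTER TABLE cannot be executed from inside a trigger
ALTER TABLE csvdata ADD PARTITION (PARTITION p2 VALUES IN (2));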

Related

creating a mysql query or job that automatically copies information from one table to another, not events

I am looking to create an automatically recurring script or job that copies data from one table to another table. I tried events, but they don't seem to work, given that they stop copying data at random despite not having an end date. What other methods within MySQL can I use to achieve this?
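Before abandoning events entirely, it may be worth ruling out the usual culprit: events only fire while the event scheduler is running, and a scheduler that gets switched off (for example via configuration after a server restart) makes events appear to stop at random. A minimal sketch, with assumed table names src and dst:

-- events fire only while the scheduler is running
SET GLOBAL event_scheduler = ON;

-- recurring copy job (interval, table names and body are assumptions)
CREATE EVENT copy_rows
    ON SCHEDULE EVERY 5 MINUTE
    ON COMPLETION PRESERVE
DO
    INSERT IGNORE INTO dst SELECT * FROM src;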

Check if a record from the database exists in a CSV file

Today I come to you for inspiration, or maybe ideas on how to solve a task without killing my laptop with massive and repetitive code.
I have a CSV file with around 10k records. I also have a database with the respective records in it. Both structures share four fields: destination, countryCode, prefix and cost.
Every time I update the database from this .csv file I have to check whether a record with the given destination, countryCode and prefix exists, and if so, update the cost. That is pretty easy and it works fine.
But here comes the tricky part: a destination may be deleted from one .csv file to the next, and I need to be aware of that and delete the unused record from the database. What is the most efficient way of handling that kind of situation?
I really wouldn't want to check every record from the database against every row in the .csv file: that sounds like a very bad idea.
I was thinking about some time stamp or just a bool column which would tell me whether the record was modified during the last update of the DB, BUT there is also a chance that none of the params within a record change, and thus there is no need to touch that record and mark it as modified.
For this task I use Python 3 and the mysql.connector lib.
Any ideas and advice will be appreciated :)
If you're keeping a time stamp, why do you care whether it's updated even when nothing in the record changed? If the reason is that you want to preserve the date of the latest real change, you can add another column saving the time stamp of the last time the record appeared in the CSV, and afterwards delete all the records whose value in that column is older than the date of the last CSV import.
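A minimal sketch of that approach in SQL, assuming the table is named rates and the per-row values shown are placeholders:

-- one-time schema change (table and column names are assumptions)
ALTER TABLE rates ADD COLUMN last_seen DATETIME NULL;

-- capture the start of this import run once
SET @run_started = NOW();

-- for each row read from the csv (driven from Python or a staging table),
-- update the cost and stamp last_seen; placeholder values shown
UPDATE rates
SET cost = 0.042, last_seen = @run_started
WHERE destination = 'Somewhere' AND countryCode = 'XX' AND prefix = '123';

-- once the whole file is processed, purge rows the csv no longer contains
DELETE FROM rates
WHERE last_seen IS NULL OR last_seen < @run_started;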
If the .CSV is a full replacement for the existing table:
CREATE TABLE `new` LIKE `real`;
load the .csv into `new` (probably using LOAD DATA ...);
RENAME TABLE `real` TO `old`, `new` TO `real`;
DROP TABLE `old`;
(The backticks matter here: NEW and REAL are reserved words in MySQL.)
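A hedged sketch of the load step, assuming a local file named rates.csv with a header row:

LOAD DATA LOCAL INFILE 'rates.csv'   -- file name is an assumption
INTO TABLE `new`
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(destination, countryCode, prefix, cost);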
If you have good reason to keep the old table and patch it instead, then:
load the .csv into a staging table;
add suitable indexes;
do one SQL statement for the deletes (no loop needed); it is probably a multi-table DELETE;
do one SQL statement to update the prices (no loop needed); it is probably a multi-table UPDATE (see the sketch below).
You can probably do the entire task (either way) without touching Python.
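A minimal sketch of those two statements, assuming the csv was loaded into a staging table named staging and the live table is named rates:

-- delete rows whose key no longer appears in the csv
DELETE r
FROM rates r
LEFT JOIN staging s USING (destination, countryCode, prefix)
WHERE s.destination IS NULL;

-- update the prices in a single statement
UPDATE rates r
JOIN staging s USING (destination, countryCode, prefix)
SET r.cost = s.cost;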

how to import a data model from Excel

I was given an Excel (CSV) sheet containing database metadata.
I'm asking whether there's a simple way to import the CSV and create the tables from there.
Data is not part of this question. The CSV looks like this:
logical_table_name, physical_table_name, logical_column_name, physical_column_name, data_type, data_length
There are about 2000 rows of metadata. I'm hoping I don't have to create the tables manually. Thanks.
I don't know of any direct import or creation tool. However, if I had to do this and couldn't find one, I would import the Excel file into a staging table (just a direct data import), and I'd add a unique auto-increment ID column to the staging table to keep the rows in order.
Then I would use some queries to build table and column creation commands from the raw data. Unless this was something I was setting up to do a lot, I would keep it dead simple and not try to get fancy: build an individual ADD COLUMN command for each column, build a CREATE TABLE command from the first row of each table, and sort them all by the order ID, tables before columns. Then you should be able to just copy the script column, check the commands, and go.
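As one possible shape for those queries, here is a sketch that collapses the CREATE TABLE and ADD COLUMN steps into one generated statement per table using GROUP_CONCAT; the staging table name meta and its auto-ID column id are assumptions:

-- assumes every data_type takes a length; adjust for types like DATE
-- wide tables may need a larger SET SESSION group_concat_max_len first
SELECT CONCAT(
         'CREATE TABLE `', physical_table_name, '` (',
         GROUP_CONCAT(
             CONCAT('`', physical_column_name, '` ',
                    data_type, '(', data_length, ')')
             ORDER BY id SEPARATOR ', '),
         ');') AS ddl
FROM meta
GROUP BY physical_table_name;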

mysql master-slave partitioned table doesn't exist

I used the 'create raw data files' method to set up MySQL master-slave replication. After setup, queries on the partitioned tables return 'table xxx doesn't exist', but the other tables work fine.
When I switch to using mysqldump instead, everything works.
Can anyone help me fix this problem?
If the partitioned table did not work but the other tables did, and the mysqldump worked fine, my best guess would be that your partitioned data is not stored in the same place as the rest of your data. Thus, when you used the tar, zip, or rsync method to copy your data directory, you left out the data that makes up the partitioned table. You would need to locate where the partitioned data is stored and move it over along with the rest of the data directory.
Based on your comment below, however, you have what might be called the famous Schrödinger table. As in Schrödinger's cat paradox, MySQL thinks that the table exists, because it shows up when you run SHOW TABLES, but does not allow you to query it; it exists and does not exist at the same time.
Usually this is the result of not copying over the metadata (the ibdata1 file and the ib_logfiles) correctly. One thing you can do to test this is, if possible, remove the partitioning from the tables and try your rsync again. If you are still getting this error, it has nothing to do with the fact that the table is partitioned; that result would lead me to believe that you did not copy all the data over correctly.
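A quick sketch of that test, with an assumed table name:

-- fold all partitions back into a single unpartitioned table
ALTER TABLE mytable REMOVE PARTITIONING;   -- table name is an assumption
-- then redo the raw-file copy and retry on the slave
SHOW TABLES;
SELECT COUNT(*) FROM mytable;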

Automated Data Import Stored Procedure From an Excel File

I have this Excel file:
Based on this data, I want to create a stored procedure that will identify the correct meter, if it exists, and perform either an insert or an update to the monthly data.
Here is the MonthlyData table:
I really have no idea where to get started on this. Sorry about the tables; I am new here and cannot post pictures yet. Please copy the tables and paste them into Excel.
Thank you
It's probably easiest to create an SSIS package for this if you're going to do this repeatedly.
First, create two tables:
myDataRaw
myDataCleaned
With myDataRaw, you truncate the table and then upload the Excel file into that table using a data upload object.
Create the stored procedure to work with the raw data. I would truncate the myDataCleaned table and then do an INSERT ... SELECT into it, making the WHERE clause specific to finding the account meters you're looking for. If there are a lot of them, you can create another table to hold the specific account meters you want to import and use it in your WHERE clause.
I hope that helps get you started.
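A minimal sketch of that stored procedure body; the column names and the AccountMeters lookup table are assumptions:

TRUNCATE TABLE myDataCleaned;

INSERT INTO myDataCleaned (MeterID, ReadDate, Usage)
SELECT r.MeterID, r.ReadDate, r.Usage
FROM myDataRaw r
WHERE r.MeterID IN (SELECT MeterID FROM AccountMeters);  -- hypothetical lookup table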
Have you considered using a MERGE query? I have no idea what 'meter' means in this context, but if it's something that can be checked in the database itself, then a MERGE query would be the best solution to your problem.
http://www.jooq.org/doc/2.6/manual/sql-building/sql-statements/merge-statement/
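A hedged sketch of what such a MERGE could look like here (SQL Server syntax, since the question involves SSIS; all table and column names are assumptions):

MERGE MonthlyData AS target
USING myDataCleaned AS source
    ON target.MeterID = source.MeterID
   AND target.ReadDate = source.ReadDate
WHEN MATCHED THEN
    UPDATE SET target.Usage = source.Usage
WHEN NOT MATCHED THEN
    INSERT (MeterID, ReadDate, Usage)
    VALUES (source.MeterID, source.ReadDate, source.Usage);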