Intro
I've searched all around for information on this problem, but I didn't really find a good source of knowledge, so I'm sorry if it seems basic to you. For me it is quite intriguing, because I'm having a hard time guessing which keywords to use on Google to retrieve the right information.
Problem Description:
As a matter of fact, I have two issues that I don't know how to deal with in a MySQL instance installed on a laptop in a Windows environment.
I have a MySQL database with 50 tables, of which 15 or 20 contain original data. The other tables I generated from the original data tables, in order to create tables suitable for analyzing the data in Power BI. The original data tables are fed by dumps from an ERP database.
My issues are the following:
How would one automate the process of receiving cumulative txt/csv files (via pen drive or any other transfer mechanism), storing those files in a folder, and then updating the existing tables with the new information? Is there any reference on best practices for such a scenario?
How can I keep my database in good shape through these successive data integrations? That is, how can I keep it scalable and responsive?
Can you point me to some sources that would help me with this?
At the moment I import data into the tables in two steps:
1st - I created the table structure with the help of the Workbench import wizard (I had to do it this way because the tables have a lot of fields, literally dozens of them, and those fields need to be in the database). I also added primary keys and indexes to those tables;
2nd - I managed to load the data from the files into those tables using the LOAD DATA INFILE command.
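For context, the load step looked roughly like this (a simplified sketch; the file path, table name, and delimiters stand in for the real ones):

```sql
-- Typical shape of the load step; adjust the path, table, and
-- delimiters to match the actual dump format.
LOAD DATA INFILE 'C:/imports/sales_dump.csv'
INTO TABLE sales_raw
FIELDS TERMINATED BY ';' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;   -- skip the header row
```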
Some of the fields in the tables created with the import wizard were given the data type TEXT, which is not necessary in this scenario. I would like to convert those fields to NVARCHAR(255) or similar. However, at this point there are a lot of fields to alter, across multiple tables, and I was wondering whether I can write a query that generates all the ALTER TABLE statements I need.
So my issue here is: is it safe to alter the data type of multiple columns across multiple tables (in this case, changing TEXT fields to NVARCHAR(255))? What is the best way to do this? Can you point me to some sources or best practices for this, please?
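For reference, the kind of generator query I have in mind looks something like this (a sketch; 'mydb' stands in for my real schema name):

```sql
-- Generates one ALTER TABLE statement per TEXT column in the schema;
-- review the output before executing it. Values longer than 255
-- characters would be truncated (or rejected, depending on sql_mode),
-- so check MAX(CHAR_LENGTH(...)) on the data first.
SELECT CONCAT('ALTER TABLE `', TABLE_NAME,
              '` MODIFY `', COLUMN_NAME, '` VARCHAR(255);') AS alter_stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND DATA_TYPE = 'text';
```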
Thank you, in advance, for your help.
Cheers
You need a scripting language, not a UI. See the mysql command-line tool, the shell of your OS, etc.
1. DROP DATABASE and re-CREATE it.
2. LOAD DATA.
3. Massage the data to get the columns cleaner than what LOAD DATA provided.
4. Sic the BI tool on the data.
If you want to discuss Step 3, we need details about what transformations are needed between step 2 and step 4. That includes providing the format or schema for steps 2 and 4.
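For illustration, steps 1 through 3 might be scripted along these lines (a sketch only; the database name, file paths, and table layout below are all assumptions):

```sql
-- reload.sql: run unattended via the mysql client, e.g. from
-- Windows Task Scheduler:  mysql -u myuser -p < C:\scripts\reload.sql
-- (user, paths, schema, and all table/column names are placeholders)

DROP DATABASE IF EXISTS erp_work;              -- step 1: start from a clean slate
CREATE DATABASE erp_work;
USE erp_work;

CREATE TABLE sales_raw (
    order_id INT,
    region   VARCHAR(255),
    amount   DECIMAL(10,2)
);

LOAD DATA INFILE 'C:/imports/latest_dump.csv'  -- step 2: load the new dump
INTO TABLE sales_raw
FIELDS TERMINATED BY ';'
IGNORE 1 LINES;

UPDATE sales_raw SET region = TRIM(region);    -- step 3: massage the columns
```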
Related
I'm migrating a large (approx. 10 GB) MySQL database (InnoDB engine).
I've figured out the migration part. Export -> mysqldump, Import -> mysql.
However, I'm trying to figure out the optimum way to validate if the migrated data is correct. I thought of the following approaches but they don't completely work for me.
One approach could have been using CHECKSUM TABLE. However, I can't use it since the target database would have data continuously written to it (from other sources) even during migration.
Another approach could have been using the combination of MD5(), GROUP_CONCAT, and CONCAT. However, that also won't work for me as some of the columns contain large JSON data.
So, what would be the best way to validate that the migrated data is correct?
Thanks.
How about this?
Do SELECT ... INTO OUTFILE from each old and new table, writing them into .csv files. Then run diff(1) between the files, eyeball the results, and convince yourself that the new tables' rows are an appropriate superset of the old tables'.
These flat files are modest in size compared to a whole database and diff is fast enough to be practical.
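For example (a sketch; the table name, columns, and file paths are placeholders, and the ORDER BY gives both files a deterministic row order for diff to compare):

```sql
-- Run once against the old server and once against the new one,
-- changing only the output file name.
SELECT id, name, payload
INTO OUTFILE '/tmp/orders_old.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM orders
ORDER BY id;

-- Then, from the OS shell:  diff /tmp/orders_old.csv /tmp/orders_new.csv
```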
I need to migrate the changed values from one database into the new one. I have two databases, "test" and "test new". I created both databases with the same data. I then made all my changes in "test", and now I need to migrate those changes into "test new" without affecting its existing values.
If the table schemas are different, how would I go about doing this? In my previous job, what I did was import the data (in my case, from Access) into my destination (MySQL), keeping the table structures, and then use SQL to select the data and manipulate it as required into the final destination tables.
In my case, I don't have documentation for the old database, and the columns are not named meaningfully, e.g. it uses names like 'field1', 'field2', etc. I had to trace through the application code to work out what the columns mean. Is there any better way? Also, sometimes columns contain multiple values as delimited data; is reading the code the only way to decode those?
It sounds like you know what to do, but are just not keen to do it.
If there is no documentation, then it makes sense that you will have to go to the code to figure out what it does. As for porting the data across, you will most likely have to write custom scripts that pull the data, manipulate it, and insert it into the new tables based on the new structure, as sketched below.
There are some tools that generate migration scripts, i.e. scripts that generate inserts for all your data. I think MySQL Workbench does this, but it most likely won't be sufficient since your tables have different structures.
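For illustration, such a custom script can often be plain SQL (a sketch; every database, table, and column name here is hypothetical), pulling from the opaque old columns, splitting delimited values, and inserting into the new structure:

```sql
-- old_db.table1 uses opaque column names, and field3 holds
-- 'city|postcode' as delimited data that gets split during the copy.
INSERT INTO new_db.customers (full_name, city, postcode)
SELECT field1,
       SUBSTRING_INDEX(field3, '|', 1),    -- everything before the '|'
       SUBSTRING_INDEX(field3, '|', -1)    -- everything after the '|'
FROM old_db.table1;
```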
We are handling a data aggregation project that combines several Microsoft SQL Server databases into one MySQL database. All the MSSQL databases have the same schema.
The requirements are:
Each MSSQL database can be imported into MySQL independently.
Before importing each record into MySQL, we need to validate it against specific criteria via PHP.
Each imported MSSQL database can be rolled back; that is, even after it has been imported into MySQL, all of that MSSQL database's records can be removed again.
We would still like to know which MSSQL database each record imported into MySQL came from.
The whole import process will be done with PHP.
We are having difficulty in many respects and don't know the best approach to solve our problem.
Your help will be highly appreciated.
PS: each MSSQL database has around 60 tables, and each table can have a few hundred thousand rows.
Don't use PHP as a database administration utility. Any time you build a quick PHP script to transfer records directly from one database to another, you're going to cause yourself a world of hurt when that script becomes required for production operation.
You have a number of problems that you need solved:
You have multiple MSSQL databases with similar if not identical tables.
You have a single MySQL database that you want to merge the data into.
The imported data must be altered in a specific way before being merged.
You want to prevent all duplicate records in your import.
You want to know what database each record originally came from.
The solution?
Analyze the source MSSQL databases and create a merge strategy for them.
Create a database structure on the MySQL database that fits the merge strategy in #1, including all the new key constraints (like unique and foreign keys) required for the consolidation.
At this point you have two options left:
Dump the data from each of the source databases into raw data using your RDBMS administration utility of choice. Alter that data to fit your merge strategy and constraints. Document this, and then merge all of the data into your new database structure.
Use a tool like opendbcopy to map columns from one database to another and run a mass import.
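As one concrete illustration of the structure in #2 (a sketch; every table and column name here is hypothetical), tagging each row with its source database covers both the provenance and the rollback requirements:

```sql
-- 'source_db' records which MSSQL database each row came from.
CREATE TABLE customers (
    id        INT NOT NULL,          -- id carried over from the source system
    source_db VARCHAR(64) NOT NULL,  -- e.g. 'branch_north'
    name      VARCHAR(255) NOT NULL,
    PRIMARY KEY (source_db, id)      -- same id may exist in several sources
) ENGINE=InnoDB;

-- Rolling back one imported database is then a single statement:
DELETE FROM customers WHERE source_db = 'branch_north';
```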
Hope this helps.
I want to store messages received from a server in some kind of database. My web site is based on Drupal, and I am using a MySQL database. The messages contain 6 fields (2 strings and 4 integers). Once stored, these messages would be constantly queried and also updated.
Which approach should I go for here: a Drupal CCK field or a plain MySQL database table?
It depends. CCK's schema was designed more for flexibility than for being "constantly queried." If you don't expect your schema to change much over time and you have the ability to create and manage your own tables, you might not get much benefit from CCK. If you ever decide to export your data to a system other than Drupal, it would probably be easier from the simple table structure that you'd get from creating your own custom tables.
However, if you're in a hurry for some reason, you can probably create a single content type with the fields you describe in about 15 minutes. If you want support for the Views module, that's already included with CCK.
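If you do go the custom-table route instead, a minimal sketch of what the table might look like (all names here are placeholders):

```sql
-- Matches the six fields described; the secondary index should cover
-- whatever the constant queries actually filter on.
CREATE TABLE server_messages (
    mid  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    str1 VARCHAR(255) NOT NULL,
    str2 VARCHAR(255) NOT NULL,
    int1 INT NOT NULL,
    int2 INT NOT NULL,
    int3 INT NOT NULL,
    int4 INT NOT NULL,
    PRIMARY KEY (mid),
    KEY idx_int1 (int1)
) ENGINE=InnoDB;
```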
If you have legacy data in a database, or have your database filled through some external process, CCK is not an option.
CCK allows you to modify fields in a web interface; but, in doing so, you change the underlying database structure.
CCK also depends on its own database structure; there is no way to make CCK grab content from a legacy or non-CCK database, or to push content in there, without writing all the CCK fields yourself.
I need to convert data that already exists in a MySQL database, to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in proper 3NF. Does anyone have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
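For example, once the linked server exists (a sketch; the linked server name MYSQL_LINK and all table and column names are assumptions):

```sql
-- OPENQUERY ships the inner statement to MySQL and returns its rows;
-- SELECT ... INTO then materializes them on the SQL Server side.
SELECT *
INTO dbo.STG_customers
FROM OPENQUERY(MYSQL_LINK, 'SELECT id, name, city FROM mydb.customers');
```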
Congrats on moving up in the RDBMS world!
SSIS is designed to do this kind of thing. The first step is to map out manually where each piece of data will go in the new structure. Say your old table had four fields: in the new structure, fields 1 and 2 go to table A, and fields 3 and 4 go to table B, but you also need the autogenerated id from table A. Make notes of where data types have changed and adjustments may be needed, or where fields are now required that were not required before, etc.
What I usually do is create staging tables. Put the data in denormalized form into one staging table, then move it to normalized staging tables and do the cleanup there, adding the new ids to the staging tables as soon as you have them. One thing you will need to do when moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data cleanup where the new structure has required fields that were not required in the old one, or handle conversion issues caused by moving to better data types (for instance, if the old database stored dates in varchar fields and you properly move to datetime in the new one, you may have some records that don't have valid dates).
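A sketch of that staging flow (every table and column name here is hypothetical; ISDATE is used for the date check because TRY_CONVERT is not available in SQL Server 2005):

```sql
-- 1. Land the denormalized rows untouched.
CREATE TABLE STG_orders_raw (
    cust_name  VARCHAR(255),
    order_date VARCHAR(50),    -- the old DB stored dates as text
    amount     DECIMAL(10,2)
);

-- 2. Deduplicate parent values before loading the production parent table.
INSERT INTO dbo.Customers (CustomerName)
SELECT DISTINCT cust_name
FROM STG_orders_raw
WHERE cust_name IS NOT NULL;

-- 3. Find rows whose text dates won't convert, instead of letting the load fail.
SELECT *
FROM STG_orders_raw
WHERE order_date IS NOT NULL
  AND ISDATE(order_date) = 0;
```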
Another issue you need to think about is how you will convert from the old record ids to the new ones.
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you; it depends on the original and new database designs. In my career I've had to do more than a few data migrations, and we almost always had to design a special utility that would update a fresh database with records from the old one, and yes, we coded it complete with all the update/insert statements that transform the data.
I don't know how many tables your database has, but if there are not too many, you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click, under tasks should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run for you automatically or which you can save and then alter as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to change the data from the old format to the new. An alternative, though, would be to import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, the import process I mentioned above in SSMS can be used. It will even create the destination tables for you. Just be sure to give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way that you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first since linked servers can sometimes have weirdness, especially when it comes to data types not mapping between different providers. Having the tables all in MS SQL will also probably be a bit faster and saves time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.