I'm writing a test framework in which I need to capture a MySQL database state (table structure, contents etc.).
I need this to implement a check that the state was not changed after certain operations. (Autoincrement values may be allowed to change, but I think I'll be able to handle this.)
The dump should preferably be in a human-readable format (preferably SQL, like mysqldump produces).
I wish to limit my test framework to using a MySQL connection only. To capture the state it should not call mysqldump or access the filesystem (like copying *.frm files or doing SELECT INTO a file; pipes are fine though).
As this would be test-only code, I'm not concerned by the performance. I do need reliable behavior though.
What is the best way to implement the functionality I need?
I guess I should base my code on some of the existing open-source backup tools... Which is the best one to look at?
Update: I'm not specifying the language I write this in (no, it's not PHP), as I don't think I would be able to reuse code as-is; my case is rather special (for practical purposes, let's assume the MySQL C API). The code would run on Linux.
Given your requirements, I think you are left with (pseudo-code + SQL):

tables = mysql_fetch "SHOW TABLES"
foreach table in tables
    create = mysql_fetch "SHOW CREATE TABLE " + table
    print create
    rows = mysql_fetch "SELECT * FROM " + table
    foreach row in rows
        // or could use the VALUES (v1, v2, ...), (v1, v2, ...), ... syntax
        // (maybe preferable for smaller tables)
        insert = "INSERT INTO " + table + " (field1, field2, field3, ...) VALUES (value1, value2, value3, ...)"
        print insert
Basically, fetch the list of all tables, then walk each table and generate INSERT statements for each row by hand (most APIs have a simple way to fetch the list of column names; otherwise you can fall back to calling DESCRIBE table_name).
SHOW CREATE TABLE is done for you, but I'm fairly certain there's nothing analogous along the lines of a SHOW INSERT ROWS.
And of course, instead of printing the dump you could do whatever you want with it.
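For a concrete illustration, here is a minimal sketch of that loop in Python with MySQL Connector/Python (the question assumes the MySQL C API, but the structure translates directly; the quoting via repr() below is a crude placeholder, not safe SQL escaping):

import mysql.connector

def dump_database(host, user, password, database):
    conn = mysql.connector.connect(host=host, user=user,
                                   password=password, database=database)
    cur = conn.cursor()
    cur.execute("SHOW TABLES")
    tables = [row[0] for row in cur.fetchall()]
    for table in tables:
        cur.execute("SHOW CREATE TABLE `%s`" % table)
        print(cur.fetchone()[1] + ";")  # second column holds the DDL
        cur.execute("SELECT * FROM `%s`" % table)
        columns = cur.column_names      # field names come with the result set
        for row in cur.fetchall():
            values = ", ".join(repr(v) for v in row)  # crude quoting, sketch only
            print("INSERT INTO `%s` (%s) VALUES (%s);"
                  % (table, ", ".join(columns), values))
    cur.close()
    conn.close()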
If you don't want to use command-line tools (in other words, you want to do it completely within, say, PHP or whatever language you are using), then why don't you iterate over the tables using SQL itself? For example, to check the table structure, one simple technique would be to capture a snapshot of it with SHOW CREATE TABLE table_name, store the result, and then later make the same call again and compare the results.
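A minimal sketch of that snapshot-and-compare idea, in Python for illustration (the cursor handling assumes MySQL Connector/Python; the helper name is made up):

def snapshot_schemas(cur, tables):
    # Capture SHOW CREATE TABLE output for each table, keyed by table name.
    snap = {}
    for table in tables:
        cur.execute("SHOW CREATE TABLE `%s`" % table)
        snap[table] = cur.fetchone()[1]
    return snap

# before = snapshot_schemas(cur, tables)
# ... run the operations under test ...
# after = snapshot_schemas(cur, tables)
# assert before == after, "table structure changed"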
Have you looked at the source code for mysqldump? I am sure most of what you want would be contained within that.
Unless you build the export yourself, I don't think there is a simple solution to export and verify the data. If you do it table by table, LOAD DATA INFILE and SELECT ... INTO OUTFILE may be helpful.
I find it easier to rebuild the database for every test. At least, I can know the exact state of the data. Of course, it takes more time to run those tests, but it's a good incentive to abstract away the operations and write less tests that depend on the database.
Another alternative I use on some projects, where the design does not allow such a clean division, is InnoDB or some other transactional storage engine. As long as you keep track of your transactions, or disable them during the test, you can simply start a transaction in setUp() and roll back in tearDown().
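Sketched in Python's unittest for illustration (this assumes the tables use InnoDB and the code under test never commits; connection details are placeholders):

import unittest
import mysql.connector

class DatabaseTest(unittest.TestCase):
    def setUp(self):
        self.conn = mysql.connector.connect(user="test", database="testdb")
        self.conn.start_transaction()

    def tearDown(self):
        self.conn.rollback()  # undo everything the test wrote
        self.conn.close()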
Related
So I'm kind of stumped.
I have a MySQL project that involves a database table that is being manipulated and altered by scripts on a regular basis. This isn't so unusual, but I need to automate a script to run (after hours, when changes aren't happening) that would save the result of the following:
SHOW CREATE TABLE [table-name];
This command generates the ready-to-run script that would create the (empty) table in its current state.
In SqlWorkbench and Navicat, the result of this SHOW command is displayed in a field in a result set, as if it were the result of a SELECT statement.
Ideally, I want to capture it into a variable in a procedure and change the table name, appending a '-mm-dd-yyyy' to the end of it, so I could track the day-to-day changes in the table schema on an active server.
However, I can't seem to be able to do that. Unlike a SELECT result set, I can't use it like one: I can't get it into a variable, or save it to a temporary or physical table, or anything. I even tried to return it as a value from a function, which gave me the error that a function cannot return a result set; that explains why it's displayed like one in the DB clients.
I suspect that this is a security thing in MySQL? If so, I can totally understand why, and I can see the dangers it would expose to a hacker, but this isn't a public-facing box at all, and I have full root/admin access to it. Hopefully somebody has already tackled this problem before.
This is on MySQL 8, btw.
[Edit] After the initial comments, I need to add: I'm not concerned about the data with this question whatsoever, but rather just these schema changes.
What I'd really -like- to do is this:
SELECT `Create Table` FROM ( SHOW CREATE TABLE carts )
But this seems to be mixing apples and oranges, as SHOW and SELECT aren't created equal, although they both seem to return the same sort of object.
You cannot do it in the MySQL stored procedure language.
https://dev.mysql.com/doc/refman/8.0/en/show.html says:
Many MySQL APIs (such as PHP) enable you to treat the result returned from a SHOW statement as you would a result set from a SELECT; see Chapter 29, Connectors and APIs, or your API documentation for more information. In addition, you can work in SQL with results from queries on tables in the INFORMATION_SCHEMA database, which you cannot easily do with results from SHOW statements. See Chapter 26, INFORMATION_SCHEMA Tables.
What is absent from this paragraph is any mention of treating the results of SHOW commands like the results of SELECT queries in other contexts. There is no support for setting a variable to the result of a SHOW command, or using INTO, or running SHOW in a subquery.
So you can capture the result returned by a SHOW command in a client programming language (Java, Python, PHP, etc.), and I suggest you do this.
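For example, a minimal Python sketch of that capture (connection details and the file-naming scheme here are just placeholders):

import datetime
import mysql.connector

conn = mysql.connector.connect(user="root", database="mydb")
cur = conn.cursor()
cur.execute("SHOW CREATE TABLE carts")
ddl = cur.fetchone()[1]  # second column is the CREATE TABLE statement

stamp = datetime.date.today().strftime("%m-%d-%Y")
with open("carts-%s.sql" % stamp, "w") as f:
    f.write(ddl + ";\n")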
In theory, all the information used by SHOW CREATE TABLE is accessible in the INFORMATION_SCHEMA tables (mostly TABLES and COLUMNS), but formatting a complete CREATE TABLE statement is a non-trivial exercise, and I wouldn't attempt it. For one thing, there are new features in every release of MySQL, e.g. new data types and table options, etc. So even if you could come up with the right query to produce this output, in a couple of years it would be out of date and it would be a thankless code maintenance chore to update it.
The closest solution I can think of, in pure MySQL, is to regularly clone the table structure (no data), like so:
CREATE TABLE backup_20220618 LIKE my_table;
As far as I know, to get your hands on the full explicit CREATE TABLE statement, as a string, would require the use of an external tool like mysqldump which was designed specifically for that purpose.
I have two databases: Sybase and MySQL. I need to export records to MySQL when they are inserted in Sybase, or export them on some scheduled event.
I've tried with an output statement, but this cannot be used in triggers or procedures.
Any suggestion to solve this problem?
(Disclaimer: I've done similar things previously, but by no means would I consider the answer below the state of the art; it's just one possible approach. Google around for something like 'cross-database replication' or 'cross-RDBMS replication' to see who's done this before.)
I would first of all see if you can't get an ETL tool to do the job without too much work. There are free open-source ones, and even things like Microsoft SSIS might work on non-MS databases.
If not, I would split this into different steps:
1. Find an appropriate Sybase output command that exports a subset of rows from one or more tables. By subset I mean you need to be able to add a WHERE clause, not just do a full table dump.
2. Use an appropriate MySQL import script/command to load the data you got out of step #1. You may need to cycle back and forth between the two until you have something that works manually.
3. Write a Sybase trigger to insert lookup keys into a to-export table. You want to store at least the table name and the source Sybase table's keys for each inserted row. Use column names like key1_char, key2_char, not the actual column names; that makes it easier to extend to other source tables as needed. Keep trigger processing as light as possible. (What about updates, by the way?)
4. Write a scheduled batch on the Sybase side to run step #1 for the rows flagged in #3.
5. Write a scheduled batch on the MySQL side to import, via #2, the results of #4. Or kick it off from #4.
Another approach is to do the #3 flagging bit as needed, but use it to drive one scheduled batch that SELECTs data from Sybase and INSERTs it into MySQL directly, as sketched below.
You'll have to pick up the data from Sybase's SELECT and bind it manually to the INSERT on the MySQL side. But you probably get finer control over what's going on, and you don't have to juggle two batches. That's what I think a clever ETL tool would already be doing on your behalf. Any half-clever scripting language like PHP, Python, or Ruby ought to handle it easily. This is especially important if you have things like surrogate/auto-generated keys.
Keep in mind that in both cases you'll have to either delete the to-export rows that you've successfully inserted or flag them as done.
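A rough Python sketch of that direct SELECT-and-INSERT batch (pyodbc is assumed as one way to reach Sybase, MySQL Connector/Python for MySQL; the to_export and key1_char names come from step #3, and the orders table and its columns are purely hypothetical):

import pyodbc
import mysql.connector

syb = pyodbc.connect("DSN=sybase_dsn")  # assumed ODBC data source for Sybase
my = mysql.connector.connect(user="etl", database="target")
syb_cur = syb.cursor()
my_cur = my.cursor()

# Pull the keys flagged by the trigger, fetch each source row, push it to MySQL.
syb_cur.execute("SELECT key1_char FROM to_export WHERE done = 0")
for (order_id,) in syb_cur.fetchall():
    src = syb.cursor()
    src.execute("SELECT col_a, col_b FROM orders WHERE order_id = ?", order_id)
    my_cur.execute("INSERT INTO orders (col_a, col_b) VALUES (%s, %s)",
                   tuple(src.fetchone()))

my.commit()
# ...then flag or delete the exported rows in to_export, as noted above.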
I need to migrate the changed database values into the new database. I have two databases, test and testnew. I created both databases with the same data. I then made all my changes in test; now I need to migrate those changes into testnew without affecting its existing values.
If the table schema is different, how should I go about doing this? In my previous job, what I did was import the data (in my case, from Access) into my destination (MySQL), keeping the table structures as they were, then use SQL to select the data and manipulate it as required into the final destination tables.
In my case, where I don't have documentation for the old database and the columns were not named meaningfully (it uses, say, 'field1', 'field2', etc.), I needed to trace through the application code to work out what the columns mean. Is there a better way? Also, sometimes columns contain multiple values as delimited data; is reading the code the only way to decipher those?
It sounds like you know what to do, but are just not keen to do it.
If there is no documentation then it makes sense that you will have to go to the code to figure out what it does. Regarding porting it across you will most likely have to write custom scripts that pull the data, manipulate it and insert it into the new table based on the new structure.
There are some tools to generate migration scripts, i.e. scripts that generate INSERTs for all your data. I think MySQL Workbench does it, but it most likely won't be sufficient since your tables have different structures.
I have a MySQL database that I use only for logging. It consists of several simple, look-alike MyISAM tables. There is always one local (i.e. located on the same machine) client that only writes data to the DB, and several remote clients that only read data.
What I need is to insert bulks of data from local client as fast as possible.
I have already tried many approaches to make this faster, such as reducing the number of INSERT statements by increasing the length of the VALUES list, or using LOAD DATA .. INFILE, and some others.
Now it seems to me that I've come up against the limitation of parsing values from a string into their target data types (it doesn't matter whether this is done when parsing queries or a text file).
So the question is:
does MySQL provide some means of manipulating data directly for local clients (i.e. not using SQL)? Maybe there is some API that allows inserting data by simply passing a pointer.
Once again: I don't want to optimize the SQL code or invoke the same queries in a script as hd1 advised. What I want is to pass a buffer of data directly to the database engine. This means I don't want to invoke SQL at all. Is it possible?
Use MySQL's LOAD DATA command:
Write the data to a file in CSV format, then execute this SQL statement:
LOAD DATA INFILE 'somefile.csv' INTO TABLE mytable
For more info, see the documentation.
Other than LOAD DATA INFILE, I'm not sure there is any other way to get data into MySQL without using SQL. If you want to avoid parsing multiple times, you should use a client library that supports parameter binding, the query can be parsed and prepared once and executed multiple times with different data.
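For illustration, a minimal sketch of that bind-once, execute-many pattern in Python with MySQL Connector/Python (the table and column names are made up):

import mysql.connector

conn = mysql.connector.connect(user="logger", database="logs")
cur = conn.cursor(prepared=True)  # server-side prepared statement

sql = "INSERT INTO log_table (ts, level, message) VALUES (%s, %s, %s)"
for record in records:  # records: an iterable of (ts, level, message) tuples
    cur.execute(sql, record)  # statement is parsed once, executed many times
conn.commit()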
However, I highly doubt that parsing the query is your bottleneck. Is this a dedicated database server? What kind of hard disks are being used? Are they fast? Does your RAID controller have battery backed RAM? If so, you can optimize disk writes. Why aren't you using InnoDB instead of MyISAM?
With MySQL you can insert multiple tuples with one insert statement. I don't have an example, because I did this several years ago and don't have the source anymore.
Consider as mentioned to use one INSERT with multiple values:
INSERT INTO table_name (col1, col2) VALUES (1, 'A'), (2, 'B'), (3, 'C'), ( ... )
This means you only have to connect to your database once, with one bigger query, instead of several smaller ones. It's easier to take the entire couch through the door once than to run back and forth with all the disassembled pieces of the couch, opening the door every time. :)
Apart from that, you can also run LOCK TABLES table_name WRITE before the INSERTs and UNLOCK TABLES afterwards. That will ensure that nothing else is inserted in the meantime. See the LOCK TABLES documentation.
INSERT INTO foo (foocol1, foocol2) VALUES ('foocol1val1', 'foocol2val1'), ('foocol1val2', 'foocol2val2') and so on should sort you out. More information and sample code can be found here. If you have further problems, do leave a comment.
UPDATE
If you don't want to use SQL, then try this shell script to do as many inserts as you want, put it in a file, say insertToDb.sh, and get on with your day/evening:
#!/bin/sh
# Quoting $1/$2 lets string values work; MySQL coerces quoted numbers.
mysql --user=me --password=foo dbname -h foo.example.com -e "INSERT INTO tablename (col1, col2) VALUES ('$1', '$2');"
Invoke as sh insertToDb.sh col1value col2value. If I've still misunderstood your question, leave another comment.
After some investigation, I found no way of passing data directly to the MySQL database engine (without it being parsed).
My aim was to speed up communication between the local client and the DB server as much as possible. The idea was that if the client is local, it could use some API functions to pass data to the DB engine directly, thus avoiding SQL (i.e. parsing) and the values in it. The closest solution was the one proposed by bobwienholt (using a prepared statement and binding parameters), but LOAD DATA .. INFILE turned out to be a bit faster in my case.
The best way to insert data in MS SQL without using INSERT INTO or UPDATE queries is just to use the MS SQL interface: right-click on the table name and select "Edit Top 200 Rows". You will then be able to add data to the database directly, just by typing into each cell. To enable searching, or to use SELECT or other SQL commands, right-click on any of the 200 rows you have selected, go to Pane, then select SQL, and you can add an SQL command. Check it out. :D
Without using an INSERT statement, you can use "SQLite Studio" for inserting data into MySQL. It's free and open source, so you can download it and check.
I have a MySQL database running on a deployment machine, which also contains data. Then I have another MySQL database which has evolved in terms of STRUCTURE + DATA for some time. I need a way to merge the changes (ONLY) in both structure and data into the DB on the deployment machine, without disturbing the existing data. Does anyone know of a tool which can do this safely? I have had a look at a few comparison tools, but I need a tool which can automate the merge operation. Note also that most of the data in the tables is BINARY, so I can't use many file comparison tools. Does anyone know of a solution to this?
I doubt you can get around implementing your own diff & merge without paying a lot.
Read the structures of both databases, execute a few ALTER TABLE [table] ADD COLUMN [foo] statements to update the structure, then port the data line by line (SELECT * on the old database, UPDATE [new columns] WHERE [primary key conditions]).
There is no easier way to my knowledge.
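A very rough Python sketch of the structure-diff half of that, reading column lists from INFORMATION_SCHEMA (the schema and table names are just examples, and it only handles added columns, which is the easy part):

import mysql.connector

def columns(cur, schema, table):
    # Map of column name -> column type, from INFORMATION_SCHEMA.
    cur.execute(
        "SELECT COLUMN_NAME, COLUMN_TYPE FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s", (schema, table))
    return dict(cur.fetchall())

conn = mysql.connector.connect(user="root")
cur = conn.cursor()
old_cols = columns(cur, "deployed_db", "my_table")
new_cols = columns(cur, "evolved_db", "my_table")

# Emit an ALTER TABLE for every column that exists only in the evolved DB.
for name, coltype in new_cols.items():
    if name not in old_cols:
        print("ALTER TABLE `deployed_db`.`my_table` ADD COLUMN `%s` %s;"
              % (name, coltype))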