Clean mysqldump for tracking schema in git

I want to write a set of small shell scripts to help me keep MySQL schema in git. But one crux is that mysqldump -uuser -ppass -d devdb > schema.sql includes two things that change over time even if there are no schema changes:
AUTO_INCREMENT=[some number] at the end of the definition of any table that has an auto-incremented column
-- Dump completed on [date and time] at the end
I have scoured the web for ways to get a dump without those things, to no avail. Can you advise? Or should I be using a different command or tool to get a clean schema for use in version control?
EDIT: I just now found the --skip-dump-date option, so that solves point #2, but I still can't get rid of the auto-increment number without losing the other table attributes (or whatever you call those things) like the engine and the default character set.

There is no way to bypass #1; see https://bugs.mysql.com/bug.php?id=20786.
As noted in the comments, you can pipe the dump through sed to filter it out (unless some of your CREATE TABLE statements deliberately set an AUTO_INCREMENT value).
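For example, a dump-and-clean pipeline along these lines (a sketch reusing the command and names from the question; the sed expression strips the AUTO_INCREMENT=N table option while leaving the other table options such as ENGINE and DEFAULT CHARSET intact):
mysqldump -uuser -ppass -d --skip-dump-date devdb | sed 's/ AUTO_INCREMENT=[0-9]*//g' > schema.sql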

Related

How to restore a dump without dropping the table first in MySQL

I have a dump of part of a table from a specific date and I would like to restore this dump into that table in a replica database, but when I try to restore it, MySQL gives me an error: the table already exists.
In case it helps, this is how I create the dump:
mysqldump --user=root my_db my_table --where="YEAR(created)='2021' AND MONTH(created)='21'" > week21.sql
I know that I can create the dump with the --opt option, but that option drops the whole table first, so I would lose the current data in the table, right?
Any idea how to do that?
Thanks
mysqldump (or mariadb-dump) emits a mess of SQL statements into its output file. You can read those statements by looking at the file in a text editor. And, you can edit the file if need be (but that's a brittle way to handle a workflow like yours).
You need to get it to write the correct SQL statements for your particular application. In your case the CREATE TABLE statements mess up your workflow, so leave them out.
If you use the command-line option --no-create-info mysqldump won't write CREATE TABLE statements into its output file. So that will solve your immediate problem.
If the rows you attempt to restore with your mysqldump output might already exist in your new table, you can use mysqldump's --insert-ignore command line option to get it to write INSERT IGNORE statements rather than plain INSERT statements.
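Putting both options onto the command from the question would look roughly like this:
mysqldump --user=root --no-create-info --insert-ignore my_db my_table --where="YEAR(created)='2021' AND MONTH(created)='21'" > week21.sql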

WARNING: The option --database has been used

Running mysqlbinlog to load up binary logs from one server to another.
Consistently get message:
WARNING: The option --database has been used. It may filter parts of transactions, but will include the GTIDs in any case.
Yah -- OK? So??
Well maybe this is a stupid question, but how am I supposed to distinguish the "GTID" of the database I want from the "GTID" of the database I don't want? In other words, how do I specify transactions to a particular database while shutting off this annoying warning?
Tried adding "--include-gtids" parameter, but I think it wants a list of GTIDs. I don't have a list of GTIDs. I have a database. Why is that complicated?
It's complicated because --database doesn't mean what you probably think it means.
It does NOT mean only include changes to the named database.
It means:
This option causes mysqlbinlog to output entries from the binary log (local log only) that occur while db_name has been selected as the default database by USE.
For example:
USE db1;
INSERT INTO db2.mytable ...
This will NOT be included if you use --database db2, because db2 wasn't the default database when this transaction was written to the binary log.
Another example:
USE db3;
UPDATE db1.mytable JOIN db2.myothertable
SET db1.mytable.col1 = ...,
    db2.myothertable.col2 = ...;
Should the changes to db1.mytable resulting from this be included if you use --database db2?
Trick question: the changes to neither table will be included, because the current default database was db3.
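If you do want to build a list for --include-gtids, one rough starting point is to scan the binary log for the GTID markers that mysqlbinlog prints before each transaction (a sketch; binlog.000123 is a hypothetical file name, and you still have to work out which transactions belong to the database you care about):
mysqlbinlog binlog.000123 | grep 'SET @@SESSION.GTID_NEXT'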

Replace BLOBs with other BLOB in MySQL

I have a database in MySQL. I need a dump of this database in another environment, but the dump needs to be a little different from the original one. What I would like to achieve is to replace 3 columns in one of my tables with "fake default" values.
The problem is that one of the columns is of type longblob and I have no idea what I could do with it. In a similar situation I saw an example where a bash script runs mysqldump, then cats the dump and runs grep and sed on it to modify the output. I was trying to find out whether this is possible with the mysqldump command itself, but I didn't find anything useful.
What I was thinking is that maybe I could create a bash script/cronjob which creates a new table almost the same as the existing one, but with the desired values instead of the original ones. It would then create a dump of the whole database and finally, with grep and sed, rename the newly created table to the name of the original one, which would be removed.
I don't know if there is any other way to do this. The perfect solution would be to perform an UPDATE on the fly while mysqldump creates the dump.
So my question is: is there an easy way to achieve this? Or is there something that I missed?
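For what it's worth, a minimal sketch of the copy-table idea described above, with hypothetical table and column names (the longblob column simply gets a constant placeholder):
-- Build a sanitized copy of the table (hypothetical names).
CREATE TABLE mytable_sanitized LIKE mytable;
INSERT INTO mytable_sanitized SELECT * FROM mytable;
UPDATE mytable_sanitized
   SET col_a    = 'fake-a',
       col_b    = 'fake-b',
       blob_col = 'fake-blob';  -- a longblob column accepts a plain string literal
The copy could then be dumped on its own and renamed back to the original table name in the output, roughly:
mysqldump -u root my_db mytable_sanitized | sed 's/mytable_sanitized/mytable/g' > sanitized_dump.sql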

How can I edit a view in MySQL Workbench without it auto prefixing a database name to the tables/views used

When I create a view, I create it in the context of the default database, so none of my references to tables have a prefix that explicitly specifies a database. However, when I edit a view in Workbench it automatically adds the database prefix!
I don't want the database prefix because when I restore a database under a different name it causes the restore to fail.
Is it possible to stop the prefixing when editing a view, or is there another way to get around the restore issue?
See https://bugs.mysql.com/bug.php?id=85176
This has been fixed in MySQL 8.0.3 and above.
That's not possible. Views are stored in specific databases, not in some space "above" all databases. Consider the following...
use playground_a; /*or whatever database*/
create view view_whatever as
select * from table_whatever;
use playground_b;
select * from view_whatever; /*here you will get an error that view_whatever does not exist*/
select * from playground_a.view_whatever; /*this works*/
That's why there will always be database prefixes in the view definition.
The only possibility I see would be to use a stored procedure with a database name as a parameter. In the procedure you'd use a prepared statement to execute a concatenated string of your query and the database name parameter. Of course this comes with downsides, e.g. you can't easily add a WHERE clause.
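A rough sketch of that stored-procedure idea, reusing the playground_a/table_whatever names from the example above (the procedure name and parameter are hypothetical):
DELIMITER //
CREATE PROCEDURE select_whatever(IN db_name VARCHAR(64))
BEGIN
  -- Build the query with the database name injected at call time,
  -- then run it via a prepared statement.
  SET @sql = CONCAT('SELECT * FROM `', db_name, '`.`table_whatever`');
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END//
DELIMITER ;
CALL select_whatever('playground_a');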
Creating the view without explicitly specifying a schema is a convenience feature. Behind the scenes the view is still saved in a specific schema (the default one in this case). When you edit the view, the source code is retrieved from the server, which returns the real code (including the schema qualification). Hence the association happens as soon as you send the view code and cannot be removed later.
Here is the command I use to create the backup:
mysqldump -u xxxxxx -pxxxxxx --routines database_a | gzip -9 > $FULLGZIPPATH
If you can't easily upgrade to MySQL 8.x, a workaround I've implemented is a post-processing step performed on the dump file itself prior to importing. I just remove the explicit prefixed db name, since the import process / view creation doesn't need it.
PowerShell -Command ^
"filter replace-dbname { $_ -replace '`<DB_NAME>`\.`', '`' };" ^
"Get-Content dump.sql -ReadCount 10 | replace-dbname | Add-Content replaced_dump.sql"
I've used PowerShell since I'm on Windows, but any scripting language will do. The only notes are that:
You'll need to do the replacement a-few-lines-at-a-time if you can't afford to read the entire dump into memory. Our dumps are about 11GB, which'd be a strain on our testing server's resources.
In my script I'm not doing an in-place string replacement, so it'll create a new dump file replaced_dump.sql alongside the original dump.sql. For me this was useful for diagnostics, because it meant if there was an issue, I didn't have to dump it again. Again, depending on your dump/disk size this might be an issue.
If your database happens to contain `<DB_NAME>`.` as content in something like a text field, this basic approach will also remove the string there.

mysql --ignore-table - concise multiple

Is there a concise way to ignore multiple tables using mysql --ignore-table?
The documentation says that --ignore-table must be used for each table.
I have to ignore around 20 tables, so the command is going to be huge.
This must be run via the command line
Go the whitelist route instead of the blacklist route. From the man page:
mysqldump [options] [db_name [tbl_name ...]]
You still have to type everything out, but at least you don't have to continuously type the --ignore-table flag.
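If typing out the whole whitelist is still too painful, a small shell sketch (hypothetical credentials, database, and table names) can build it from information_schema, excluding the tables you want to skip, and pass it straight to mysqldump:
TABLES=$(mysql -uuser -ppass -N -B -e "SELECT table_name FROM information_schema.tables WHERE table_schema = 'devdb' AND table_name NOT IN ('skip_me_1', 'skip_me_2')")
mysqldump -uuser -ppass devdb $TABLES > dump.sql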
Try using MySQL Workbench to build the command. It'll give the option to specify which tables you want via checkboxes.
If you need to, you can copy and paste the command from the Export Progress tab.