Export only the indexes from a MySQL database

I have a MySQL database from which I need to retrieve only the indexes present across the entire database. Is there any way I could get them, or get a
CREATE INDEX script that would do so?
I don't just want to SELECT them; rather, I would like a script which, when executed on a similar database, adds the indexes.
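One way to generate such a script (a sketch, not a full dump tool: 'your_db' is a placeholder for your schema name, and PRIMARY keys are skipped) is to build ALTER TABLE ... ADD INDEX statements from information_schema.statistics:

SELECT CONCAT(
    'ALTER TABLE `', table_schema, '`.`', table_name, '` ADD ',
    IF(non_unique = 0, 'UNIQUE ', ''),
    'INDEX `', index_name, '` (',
    GROUP_CONCAT(CONCAT('`', column_name, '`') ORDER BY seq_in_index SEPARATOR ', '),
    ');'
) AS add_index_stmt
FROM information_schema.statistics
WHERE table_schema = 'your_db'        -- placeholder: your database name
  AND index_name <> 'PRIMARY'
GROUP BY table_schema, table_name, index_name, non_unique;

Redirecting the output of this query to a file gives a script that recreates the same indexes on a similarly structured database.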

Related

Get MySQL database structure alter queries

I'm working on a version control program, and I would like to implement database structure versioning as well.
Is there a way to get a list of all the queries that have altered the database structure in any way?
For example I added a column to the 'users' table called 'remember_token'. Is there a way I can get the specific query that was executed on the MySQL server in order to add that column?
You may want to enable the MySQL general query log and then filter for ALTER statements, or anything else you need.
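For example (a sketch; logging to a table makes filtering easy, though the general log has some performance cost on a busy server):

-- send the general query log to the mysql.general_log table
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- later, pull out the statements that changed the structure
SELECT event_time, argument
FROM mysql.general_log
WHERE argument LIKE 'ALTER TABLE%';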

What is the most efficient way to create a new MySQL database

I need to setup a development environment for several developers. Each of them needs to test the software with a "fresh" MySQL database. There is a SQL file with many CREATE, ALTER and INSERT queries.
Currently there is a PHP script using mysqli::multi_query that creates a new database and runs all the queries from the SQL file. It is called each time a developer needs a fresh instance of the database, but it takes too long to execute all the needed queries.
I tried changing the script to execute mysql < my_pre_mysqldumped_file.sql, but it is almost as slow.
Also, I tried to have an "initial" database and copy each table with CREATE TABLE ... LIKE ..., but it does not copy foreign keys.
So, the question: what is the fastest way, from a server-performance point of view, to create a new MySQL database or copy an existing one?
Based on my research on the internet, I suppose there is no efficient way to do it. I have also asked this question at https://dba.stackexchange.com/questions/51257/what-is-the-most-efficient-way-to-create-new-mysql-database. Guido's suggestion to keep a stack of pre-generated databases seems to be the most relevant.
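A sketch of that idea (the fresh_N names and the pool size of 5 are placeholders): a background job keeps a few databases imported ahead of time, so handing one to a developer is instant.

#!/bin/sh
# Refill the pool: import any database that is missing.
for i in 1 2 3 4 5; do
    if ! mysql -e "USE fresh_$i" 2>/dev/null; then
        mysql -e "CREATE DATABASE fresh_$i"
        mysql fresh_$i < my_pre_mysqldumped_file.sql
    fi
done
# A developer claims a ready database (e.g. fresh_3) and this job,
# run again from cron, re-imports a replacement in the background.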

Exporting table data without the schema?

I've tried searching for this but so far I'm only finding results for "exporting the table schema without data," which is exactly the opposite of what I want to do. Is there a way to export data from a SQL table without having the script recreate the table?
Here's the problem I'm trying to solve, in case someone has a better solution: I have two databases, each on a different server; I'll call them the raw database and the analytics database. The raw database is the "real" database: it collects records sent to its server and stores them in a table using the transactional InnoDB engine. The analytics database is on an internal LAN, is meant to mirror the raw database, and will periodically be updated so that it matches the raw database. It's separated like this because we have a program that does some analysis and processing of the data, and we don't want to run it on the live server.
Because the analytics database is just a copy, it doesn't need to be transactional, and I'd like its table to use the MyISAM engine, because I've found MyISAM much faster to import data into and query against. The problem is that when I export the table from the live raw database, the table schema gets exported too, with the engine set to InnoDB; so when I run the script to import the data into the analytics database, it drops the MyISAM table and recreates it as an InnoDB table. I'd like to automate this export/import process, but the generated SQL script changing the table engine from MyISAM to InnoDB is stopping me, and I don't know how to get around it. The only way I know is to write a program with direct access to the live raw database that runs a query and updates the analytics database with the results, but I'm looking for alternatives to this.
Like this?
mysqldump --no-create-info ...
Use the --no-create-info option:
mysqldump --no-create-info db [table]
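Put together, the automated cycle could look like this (a sketch; the host and database/table names are placeholders, and the MyISAM table on the analytics side is created once by hand):

# export data only -- no DROP TABLE / CREATE TABLE statements
mysqldump --no-create-info -h raw-server raw_db records > records_data.sql
# optionally clear the mirror first so repeated runs don't duplicate rows
mysql -h analytics-server -e "TRUNCATE TABLE analytics_db.records"
# load into the existing MyISAM table on the analytics server
mysql -h analytics-server analytics_db < records_data.sql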

MySQL - Querying multiple WordPress databases at once?

I need to execute a cleaning operation on my MySQL databases.
I have 140 WordPress databases under the same connection.
I have ~30 forbidden words.
I need to query the post_content column of each wp_posts table, find these 30 words,
and remove the rows that contain any of these words.
I must do it on all databases at once!
You could write a small program in Java or C# to loop through all the databases.
If you execute
show databases; it will retrieve all the databases present on your connection. From there I assume you know the tables you want to query for your forbidden words. Then, within this new application, you can loop over each database and query the wanted table.
Let me know if this is what you were expecting. If you want a code sample, let me know.
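Alternatively, you can stay inside MySQL and generate one DELETE per database from information_schema (a sketch: badword1/badword2 stand in for the real forbidden words, and the table and column names assume a standard WordPress schema):

SELECT CONCAT(
    'DELETE FROM `', schema_name, '`.`wp_posts` ',
    'WHERE post_content LIKE ''%badword1%''',
    ' OR post_content LIKE ''%badword2%'';'
) AS cleanup_stmt
FROM information_schema.schemata
WHERE schema_name NOT IN
    ('mysql', 'information_schema', 'performance_schema', 'sys');

Feeding the generated statements back into the mysql client cleans all 140 databases in one pass.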

Setting up a master database to control the structure of other databases

I have a case where several databases are running on the same server. There's one database for each client (company1, company2, etc.). The structure of each of these databases should be identical, with the same tables and so on, but the data contained in each will be different.
What I want to do is keep a master db that will contain no data, but manage the structure of all the other databases, meaning if I add, remove or alter any tables in the master db the changes will also be mirrored out to the other databases.
Example: If a table named Table1 is created in the master DB, the other databases (company1, company2 etc) will also get a table1.
Currently it is done by a script that monitors the database logs for changes made to the master database and running the same queries on each of the other databases. Looked into database replication, but from what I understand this will also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I want (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know. :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master schema name>';
Use this list in a script or stored procedure along the lines of:
create database if not exists <name>;
use <name>;
for each ( table_name in list )
create table if not exists <name>.table_name like <master_schema>.table_name;
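A runnable version of that pseudocode as a stored procedure (a sketch: master_db and company1 are placeholder schema names):

DELIMITER //
CREATE PROCEDURE sync_structure(IN target_schema VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE tbl VARCHAR(64);
    DECLARE cur CURSOR FOR
        SELECT TABLE_NAME FROM information_schema.tables
        WHERE TABLE_SCHEMA = 'master_db';  -- placeholder: master schema
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    table_loop: LOOP
        FETCH cur INTO tbl;
        IF done THEN LEAVE table_loop; END IF;
        -- identifiers cannot be parameterized, so build the DDL dynamically
        SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', target_schema,
                          '`.`', tbl, '` LIKE `master_db`.`', tbl, '`');
        PREPARE stmt FROM @ddl;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;

-- usage: mirror the master's tables into company1
CALL sync_structure('company1');

Note that this only creates tables that are missing; it does not propagate ALTER or DROP statements.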
Now that I'm thinking about it, you might want something that reacts to new entries in information_schema.tables and calls the create/maintain script. Note, though, that MySQL does not allow triggers on information_schema tables, so in practice you would have to poll it for changes and react accordingly.