I'm looking for a flexible MySQL database schema to save logs.
Currently I'm using this one (simplified for the example).
Bigger version: http://i.stack.imgur.com/b6NC9.png
I can do a SELECT * FROM log, read the log_type, and then read the type-specific table.
If log_type.tag is user, the specific table is called log_user.
To add logs for an application, I would add an application tag to log_type and create a new table log_application.
To read all logs for a user, I run SELECT * FROM log_user INNER JOIN log ON log_user.logid = log.id WHERE userid = 123.
This actually works very well and is flexible. Nevertheless, I would be interested if somebody has a better idea for such a database schema.
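Roughly, the simplified tables look like this (column names come from the queries above; the types and the created_at column are only illustrative, see the linked image for the full version):

-- Types and created_at are illustrative; logid, userid, tag, id are from the queries above.
CREATE TABLE log_type (
    id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    tag VARCHAR(32)  NOT NULL                  -- 'user', 'application', ...
);

CREATE TABLE log (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    log_type_id INT UNSIGNED    NOT NULL,
    created_at  DATETIME        NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (log_type_id) REFERENCES log_type (id)
);

CREATE TABLE log_user (
    logid  BIGINT UNSIGNED NOT NULL,           -- points at log.id
    userid INT UNSIGNED    NOT NULL,
    FOREIGN KEY (logid) REFERENCES log (id)
);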
You do not need three tables to store logs. Just use one flat table to achieve the same objective; this will save you a lot of unnecessary coding.
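For illustration, a flat version could look something like this (columns other than those in your queries are just examples):

-- One flat table: the per-type tables collapse into nullable reference columns
-- plus a type discriminator. userid/appid/message/created_at are example columns.
CREATE TABLE log (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    log_type   VARCHAR(32)     NOT NULL,       -- 'user', 'application', ...
    userid     INT UNSIGNED    NULL,           -- filled when log_type = 'user'
    appid      INT UNSIGNED    NULL,           -- filled when log_type = 'application'
    message    TEXT,
    created_at DATETIME        NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY idx_type (log_type),
    KEY idx_user (userid)
);

-- All logs for a user then become a single-table query, no join needed:
SELECT * FROM log WHERE userid = 123;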
I have access to a remote database, and I would like to dump the schema and data of several views onto my local machine and load this into my local database as tables in a quick and easy way.
I lack the user privileges to run CREATE TABLE AS (SELECT * FROM target_view), otherwise this would be trivial to solve. In other words, I want to retrieve and recreate the "composite" schema of target_view as if it were a table.
I do not want the output of SHOW CREATE VIEW, as this only shows a complex SELECT statement with joins to various tables on the remote server that I have limited ability to access. A related problem I'm seeing in MySQL 8.x is that when I run SHOW CREATE TABLE on the view, the command simply acts as an alias of SHOW CREATE VIEW (which is reasonable).
Frustratingly, I can run DESCRIBE and see the schema of these views as if they were tables. I really just need to convert this information into a CREATE TABLE statement without actually being able to run CREATE TABLE.
In case it weren't obvious, the key is to avoid manually reconstructing these views' tabular schemas (as they may change in the future). I also want to avoid reverse engineering a generic table of 20-30 VARCHAR or TEXT columns from a CSV dump.
I don't know of any way to display the metadata of a result set in CREATE TABLE syntax.
Given your circumstances, what I would do is first create the base tables and the view on your local MySQL instance; then you can use the CREATE TABLE ... AS SELECT ... syntax to produce a concrete table that matches the metadata of the view's result set.
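For example (the base tables and column list below are placeholders; the point is that once the view compiles locally, CREATE TABLE ... AS SELECT materializes its result-set metadata):

-- Local stand-ins (placeholder names and columns) for the remote base tables,
-- just enough for the view definition from the remote SHOW CREATE VIEW to compile.
CREATE TABLE base_a (id INT, name VARCHAR(100));
CREATE TABLE base_b (a_id INT, amount DECIMAL(10,2));

CREATE VIEW target_view AS
SELECT a.id, a.name, b.amount
FROM base_a AS a
JOIN base_b AS b ON b.a_id = a.id;

-- Materialize a concrete table with the same column metadata as the view;
-- the WHERE 1 = 0 copies the structure without copying any rows.
CREATE TABLE target_view_as_table AS
SELECT * FROM target_view WHERE 1 = 0;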
I'm currently in the process of implementing a monitoring system, part of which includes monitoring certain aspects of a MySQL database, such as:
The replication state of the given MySQL instance (sys table)
The number of records in database 1's table x (db1.tableX)
The sum total of a given attribute in another db's table (db2.tableY.column3)
These 3 things can be found using very simple queries:
SELECT viable_candidate FROM sys.gr_member_routing_candidate_status
SELECT COUNT(1) FROM db1.tableX
SELECT SUM(column3) FROM db2.tableY
However, this then requires a user account to be made with at least read access to 3 entire databases / tables.
Is there instead a way to limit access to the results of given queries only? I wondered about making an additional database which is somehow linked to the output of the above 3 queries, and then creating a new user with access only to this database, but I'm not sure what this technology is or how it would work.
Thanks in advance!
Create a view based on each query and then grant only SELECT permission on each view.
Example:
CREATE VIEW view_name AS
SELECT viable_candidate
FROM sys.gr_member_routing_candidate_status;
And then
GRANT SELECT ON view_name TO 'user1'@'localhost';
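MySQL views run with SQL SECURITY DEFINER by default, so the account that queries a view does not need any privileges on the underlying tables. A sketch of the full setup, putting all three views into a dedicated database (the database, view, and account names below are just examples):

-- 'monitoring', the view names, and the 'monitor' account are example names.
CREATE DATABASE monitoring;

CREATE VIEW monitoring.replication_state AS
SELECT viable_candidate FROM sys.gr_member_routing_candidate_status;

CREATE VIEW monitoring.tablex_rowcount AS
SELECT COUNT(1) AS row_count FROM db1.tableX;

CREATE VIEW monitoring.tabley_total AS
SELECT SUM(column3) AS total FROM db2.tableY;

CREATE USER 'monitor'@'localhost' IDENTIFIED BY 'change_me';
GRANT SELECT ON monitoring.* TO 'monitor'@'localhost';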
I am using Laravel 4 to build a site that uses a large number of MySQL databases, where each database has multiple tables. The structure and organization of the databases is not within my control.
I need to be able to replicate a table from one database into another database at run time (the source database, the destination database, and the specific table within the source database depend on choices made by the user).
Duplicating a table is easy to do with mysql:
CREATE TABLE database2.new_table LIKE database1.original_table;
INSERT INTO database2.new_table SELECT * FROM database1.original_table;
but I cannot figure out how to do it with Laravel.
I can easily access each database by creating its own connection ('mysql1' and 'mysql2'), but I can't figure out how to construct a statement that uses both. The following doesn't work:
$success = DB::connection('mysql2')->statement('CREATE TABLE new_table LIKE database1.original_table');
because I am trying to access database1 directly without using the 'mysql1' connection, and Laravel generates an error saying that database1.original_table doesn't exist.
I feel like the solution should be obvious but don't have enough experience with Laravel to figure it out. Any guidance would be greatly appreciated.
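One possible workaround (a sketch, assuming both databases live on the same MySQL server): grant the account behind the 'mysql2' connection read access to database1, after which the fully qualified statement can run over that single connection. The account name and host below are placeholders:

-- Run once as an administrator on the shared server (hypothetical account/host):
GRANT SELECT ON database1.* TO 'app_user'@'%';

-- The statement issued through the 'mysql2' connection can then reference both schemas:
CREATE TABLE database2.new_table LIKE database1.original_table;
INSERT INTO database2.new_table SELECT * FROM database1.original_table;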
I've been asked to build a module for a web application, which will also be used as a stand-alone website. Since this is the case, I wanted to use a separate database, and wondered if there was a way to have a table in one database act as a "pointer" to a table in another database.
For example, I have databases db1 and db2
db1 has table users, so I want to have db2.users point to db1.users.
I know I could set up triggers and whatnot to sync two separate tables, but this sounds cooler :)
EDIT
So in my code I'm using sql such as
select * from users
Now, at the database level, I want "users" to actually be db1.users. Then, if I want to, I can remove the alias/pointer and "select * from users" will point to the users table in the current database. I guess what I'm looking for is a "global alias" type of thing.
Just use it directly from another database?
SELECT ... FROM `db1`.`users` LEFT JOIN `db2`.`something`
The federated storage engine offers something similar to the feature you asked for.
And if your databases are on the same database server, the federated storage engine sounds a bit like overkill to me. You may want to create a view instead.
Both methods won't be useful if db1 is not available. As Emmerman already points out, you need to store the data in db2 if you want to prepare for the case of db1 being unavailable.
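For the view approach, a minimal sketch (assuming both databases are on the same server and your account can read db1):

-- In db2, expose db1.users under the local name users:
CREATE VIEW db2.users AS SELECT * FROM db1.users;

-- Code connected to db2 can keep running SELECT * FROM users unchanged;
-- dropping the view later lets a real db2.users table take its place.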
I have a case where several databases run on the same server. There's one database for each client (company1, company2, etc.). The structure of each of these databases should be identical, with the same tables etc., but the data contained in each db will be different.
What I want to do is keep a master db that will contain no data, but manage the structure of all the other databases, meaning if I add, remove or alter any tables in the master db the changes will also be mirrored out to the other databases.
Example: If a table named Table1 is created in the master DB, the other databases (company1, company2 etc) will also get a table1.
Currently this is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand this would also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I wish (having an easy maintainable structure across several datbases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master database name>';
Use this list in a script or stored procedure along the lines of:
create database if not exists <target_db>;
use <target_db>;
for each table_name in list:
    create table if not exists <target_db>.table_name like <master_db>.table_name;
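If you want this entirely inside MySQL, the loop above can be written as a stored procedure using a cursor over information_schema.tables and a prepared statement (the procedure name and the database names in the example call are placeholders):

DELIMITER //
CREATE PROCEDURE clone_schema(IN src VARCHAR(64), IN dst VARCHAR(64))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE tbl VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT TABLE_NAME FROM information_schema.tables
    WHERE TABLE_SCHEMA = src AND TABLE_TYPE = 'BASE TABLE';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO tbl;
    IF done THEN LEAVE read_loop; END IF;
    -- Build and run: CREATE TABLE IF NOT EXISTS dst.tbl LIKE src.tbl
    SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', dst, '`.`', tbl,
                      '` LIKE `', src, '`.`', tbl, '`');
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;

-- Example call with placeholder database names:
CALL clone_schema('master_db', 'company1');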
Now that I'm thinking about it, you might be able to put a trigger on the information_schema.tables table that would call the create/maintain script (look for inserts and react accordingly), though I'm not sure MySQL allows triggers on information_schema tables.