Is MySQL trigger the way to go? - mysql

I'm creating a MySQL db in which to store files of different formats and sizes, like PDFs, images, zips and whatnot.
So I started looking for examples on the blob data type (which I think is the right data type for storing the above-mentioned files) and I stumbled upon this SO question. Essentially, what the answer suggests is not to store the blobs directly in the "main" table, but to create two different tables: one for the file descriptions and the other for the blobs themselves (as these can be heavy to fetch). A foreign key constraint ties each file to its description, and a join retrieves the wanted blob when needed.
So I've created the following tables:
create table if not exists file_description(
    id int auto_increment primary key,
    description_ varchar(200) not null,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) engine=INNODB;

create table if not exists files(
    id int auto_increment primary key,
    content longblob not null,
    format_extension varchar(10) not null,
    foreign key (id) references file_description(id)
        on update cascade
        on delete cascade
) engine=INNODB;
But how can I enforce that after each insertion into the file_description table directly follows an insertion into the files table?
I'm no expert, but from what I've seen of triggers they are used in a different way than what I'd like to do here. Something like
create trigger whatever
after insert on file_description
for each row
...
I don't know, how do I do that?

You cannot enforce through database tools that an insertion into a parent table is followed by an insertion into a child table, because the data to be inserted comes from outside the database. You need to design your application so that it populates both tables one right after the other.
What the application can do is wrap the two INSERT statements in a single transaction, ensuring that either both inserts succeed or both are rolled back, leaving your database in a consistent state.
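A minimal sketch of the transactional approach, using your tables (the description and file path are made up, and LOAD_FILE additionally requires the FILE privilege and a path permitted by secure_file_priv):

START TRANSACTION;

INSERT INTO file_description (description_)
VALUES ('quarterly report, PDF');

-- LAST_INSERT_ID() returns the id generated by the insert above,
-- so the child row shares the parent's primary key
INSERT INTO files (id, content, format_extension)
VALUES (LAST_INSERT_ID(), LOAD_FILE('/tmp/report.pdf'), 'pdf');

COMMIT;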

Related

How to execute a MySQL bulk insert with foreign keys and set to NULL if invalid

I'm importing several hundred thousand rows coming from a service's XML data, with a foreign key pointing from screenings.venue_id to the venues table.
However, I found out that some of their data has missing venues, so the bulk insert fails.
Is there a way to automatically set screenings.venue_id to NULL if the foreign key check fails? I would really like to keep the foreign key for data cohesion and cascading update/delete, without using triggers.
So far my best idea is to disable the foreign key checks before the insert. This would work for the initial import, but it means that every time a screening with an invalid foreign key is saved, it would fail.
For subsequent inserts I would try it this way:
Create a separate landing table which doesn't have the FK relation.
Create a BEFORE INSERT trigger on that landing table which verifies whether the venue_id exists in the venues master table. If it exists the insert continues as-is; if not, you can null out the reference or divert those bad rows into another log table for future reference. A sketch follows.
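Something along these lines, as a hypothetical sketch (the screenings_landing table and the venues column names are assumptions, and the final copy from the landing table into screenings is left out):

DELIMITER //
CREATE TRIGGER screenings_landing_bi
BEFORE INSERT ON screenings_landing
FOR EACH ROW
BEGIN
    -- null out the reference when the venue is missing, so the row
    -- passes the foreign key check when copied into screenings
    IF NEW.venue_id IS NOT NULL
       AND NOT EXISTS (SELECT 1 FROM venues WHERE id = NEW.venue_id) THEN
        SET NEW.venue_id = NULL;
    END IF;
END//
DELIMITER ;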
Hope this helps

Migrate SQLite data into MySQL and manage/update the foreign keys?

I'm developing an Android application in which the data is stored in a SQLite database.
I have set up a sync with a MySQL database on the web, to which I'm sending the data stored in the device's SQLite database.
The problem is that I don't know how to maintain the relations between tables: the primary keys are going to be reassigned by AUTO_INCREMENT while the foreign keys remain the same, breaking the relations between tables.
If this is a full migration, don't use auto increment during the migration - create the tables with plain key columns and use ALTER TABLE to restore AUTO_INCREMENT after the import.
For an incremental sync, the easiest way I see is an additional column in each MySQL table, called sqlite_id, filled with the original id. Then you can update the references using UPDATE with joins, as sketched below.
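A minimal sketch, assuming a hypothetical parent/child pair where child.parent_id still holds the old SQLite id and parent.sqlite_id stores the original key:

UPDATE child c
JOIN parent p ON p.sqlite_id = c.parent_id
SET c.parent_id = p.id;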
Alternatives involve temporary tables for storing the data and an auxiliary table used for pairing old and new ids. Tedious for a bigger data model.
The approach I tend to use, if possible, is to avoid auto increment in such situations. I usually have an auxiliary table with four columns, like this: t_import(tablename, operationid, sqlite_id, mysqlid).
Process is the following:
Import the primary keys into t_import. Use operationid to separate parallel imports if needed.
Generate new keys for data tables and store them into t_import table. This can be combined with step one.
Import the actual data and use t_import for setting new primary keys and restore relations.
That should work for most scenarios I know about.
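In SQL the pairing table and step 3 might look like this (the child/parent table and column names here are hypothetical):

CREATE TABLE t_import (
    tablename   VARCHAR(64) NOT NULL,
    operationid INT NOT NULL,
    sqlite_id   INT NOT NULL,
    mysqlid     INT NOT NULL,
    PRIMARY KEY (tablename, operationid, sqlite_id)
);

-- step 3: repoint a child table's foreign key from the old SQLite id
-- to the newly generated MySQL id
UPDATE child c
JOIN t_import t ON t.tablename = 'parent'
                AND t.operationid = 1
                AND t.sqlite_id = c.parent_id
SET c.parent_id = t.mysqlid;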
Thanks for the help, you have given me some ideas.
I will try to add an id2 field to the tables that will store the same value as the primary key (_id).
When I send the information from SQLite to MySQL and the primary key is reassigned, I will still have the id2 field with the original value of the primary key, so I can compare it with the foreign keys of the other tables and update them.
Let’s see if it works.
Thanks

Opposite of RESTRICT in MySQL Foreign Key On Delete?

I'm in the process of redesigning some application security log tables (things like when users log in, access different files, etc.) to address some changing requirements. They were originally made with MyISAM, but they don't really get accessed that often, and switching to InnoDB plus adding a bunch of foreign keys for data integrity would really be more beneficial. Since I have to remake the tables anyway, I figure this is as good a time as any to make the switch.
For the most part, everything is straightforward foreign keys and works as expected. The only part where I'm trying something weird and hitting problems is with user_ids. Each record in these log tables is associated with a user_id, and I want to make sure the given user_id exists when a record is inserted. Adding a foreign key that references the user table solves that problem - simple stuff. Here are some concise, representative tables:
The User Table
CREATE TABLE tbl_user (
    id INT(10) NOT NULL AUTO_INCREMENT,
    first_name VARCHAR(50),
    PRIMARY KEY(id)
) ENGINE=InnoDB;
Example Log Table
CREATE TABLE tbl_login_time (
    id INT(10) NOT NULL AUTO_INCREMENT,
    user_id INT(10) NOT NULL,
    login_at TIMESTAMP NOT NULL,
    PRIMARY KEY(id),
    CONSTRAINT `tbl_login_time_fk_1` FOREIGN KEY (user_id) REFERENCES tbl_user (id)
        ON UPDATE CASCADE ON DELETE ???
) ENGINE=InnoDB;
My problem is that I want the foreign key enforced for inserts, updates to be cascaded, but deletes in tbl_user to not affect tbl_login_time at all. Normally users get marked as inactive, but every once in a while a user gets deleted entirely, yet the logs need to be maintained.
The MySQL docs list six options for ON DELETE, and none of them sounds appropriate:
RESTRICT: Would prevent the deletion in tbl_user.
NO ACTION: Gets evaluated just like RESTRICT.
CASCADE: Would delete in tbl_user like I want, but also in tbl_login_time.
SET NULL: Would delete in tbl_user and leave the row in tbl_login_time, but nulls out user_id. Close, but no cigar.
SET DEFAULT: MySQL recognizes it, but rejects it.
Omit ON DELETE: Equivalent to RESTRICT.
I've never used a foreign key like this before (enforce INSERT and UPDATE but not DELETE), and after reading a lot of other questions it doesn't seem like anyone else does either. That should probably tell me this is the wrong approach, but can it work somehow?
My problem is that I want the foreign key enforced for inserts, updates to be cascaded, but deleting records in tbl_user to not affect tbl_login_time at all.
You can't accomplish that with a foreign key constraint.
In some applications, ON DELETE SET NULL makes sense. But your application is essentially a log file stored in a SQL database. You have a significant problem in that you want to delete identifying information (users), but retain their ID numbers in some cases. I frankly don't understand why you're willing to retain the fact that user 23332 logged in at 18:40 today, while not caring whether you can identify who user 23332 is.
You have a few options.
Drop the logfile table, and store logfile data in a file in the filesystem, not in the database. If I were in your shoes, I'd consider this first. If we're talking about a database that's somehow accessible over the web, make sure the log file is stored outside the web root. (I'd store it under /var/log with all the other log files.)
Use foreign key constraints, and never delete a user.
Use foreign key constraints, and live with the effect of ON DELETE SET NULL or ON DELETE SET DEFAULT. In this particular application, ON DELETE SET NULL and ON DELETE SET DEFAULT are semantically equivalent: both replace good data with data that doesn't identify the user. (Note that for ON DELETE SET NULL to work at all, user_id would have to be declared nullable instead of NOT NULL.) If you can't identify user 23332 anyway, who cares whether you know she logged in at 18:40 today?
Drop the foreign key constraints, and use triggers to do whatever you like; a sketch follows.
I'm pretty sure we agree that the most obvious option--use foreign keys with ON DELETE CASCADE--is simply wrong for your application.
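If you go the trigger route, the insert-time check might look like this sketch (requires MySQL 5.5+ for SIGNAL; the trigger name is made up):

DELIMITER //
CREATE TRIGGER tbl_login_time_bi
BEFORE INSERT ON tbl_login_time
FOR EACH ROW
BEGIN
    -- enforce the reference on the way in; later deletes in tbl_user
    -- won't touch the log rows
    IF NOT EXISTS (SELECT 1 FROM tbl_user WHERE id = NEW.user_id) THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'user_id does not exist in tbl_user';
    END IF;
END//
DELIMITER ;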

Changing a MySQL database retrospectively

Is there a method to track changes to a MySQL database? I develop offline and then commit all the changes to my server. For the app itself I use Git and it works nicely.
However, for the database I'm applying every change manually, because the live database contains customer data and I cannot just replace it with the development database.
Is there a way to only have the structural changes applied without completely replacing one db with another?
The term you're looking for is 'database migrations' (and no, it doesn't refer to moving from one RDBMS to another). Migrations are a way to programmatically version-control your database structure. Most languages have some kind of migrations toolkit, often as part of an ORM library/framework.
For PHP you can look at Doctrine
For Ruby it's Rails of course
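Even without a framework, the core idea can be hand-rolled. A minimal sketch (the table layout and version string are made up):

CREATE TABLE IF NOT EXISTS schema_migrations (
    version    VARCHAR(32) PRIMARY KEY,
    applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- each structural change ships as a script that records itself,
-- so it is applied exactly once per environment
ALTER TABLE customer ADD COLUMN facebook_account VARCHAR(255);
INSERT INTO schema_migrations (version) VALUES ('001_add_facebook_account');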
The key to keeping track of your changes is snapshots, my friend.
Now, it's a wide field. The first thing you have to decide is whether you want to keep track of your database with some kind of data in it. If that's the case you have several options, from using LVM snapshots, to copying the binary logs, to a simple mysqldump.
Now, if what you want is a smooth transition between schema changes (say you added a column, for example), you have some other options.
The first one is replication. That's a great option, but a little complex. With replication you can alter a slave and, once it's done, promote it to master with some locking, replace the old master, and so on. It's really involved, but it's the better option.
If you cannot afford replication, what you must do is apply the changes to your single-master DB with minimum downtime. A good option is this:
Suppose you want to rebuild your customer table to add a facebook_account field. First, you can use a stand-in table, like this:
The original table (it has data):
CREATE TABLE `customer` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(255) NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;
The new one:
CREATE TABLE `new_customer` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(255) NOT NULL,
    `facebook_account` varchar(255) DEFAULT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Or simply:
CREATE TABLE new_customer LIKE customer;
ALTER TABLE new_customer ADD COLUMN facebook_account VARCHAR(255);
Now we're going to copy the data to the new table. We'll need to issue a few other statements first; I'll explain each in turn.
First, we cannot allow other connections to modify the customer table while we're switching tables, so we take a write lock (see the MySQL manual on LOCK TABLES for details):
LOCK TABLES customer WRITE, new_customer WRITE;
Now I flush the table to write any cached content to disk:
FLUSH TABLES customer;
Now we can do the insert. First I disable the keys for performance; after the data is inserted I enable the keys again. (Note that DISABLE KEYS only affects nonunique indexes on MyISAM tables; InnoDB ignores it with a warning.)
ALTER TABLE new_customer DISABLE KEYS;
INSERT INTO new_customer (id, name, facebook_account) SELECT id, name, NULL FROM customer;
ALTER TABLE new_customer ENABLE KEYS;
Now we can switch the tables.
ALTER TABLE customer RENAME old_customer;
ALTER TABLE new_customer RENAME customer;
Finally we have to release the lock.
UNLOCK TABLES;
That's it. If you want to keep a record of your modified tables, you may want to rename the old_customer table to something else or move it to another database.
The only issue I didn't cover here is triggers. You have to pay attention to any enabled trigger, but that will depend on your schema.
That's it, hope it helps.

mysql, how to create table and automatically track users who add or delete rows/tables

I would like some kind of revision control history for my sql database.
I would like this table to keep updating with a record of who deleted what, when, and so on.
I am connecting to MySQL using Perl.
Approach 1: Create a separate "audit" table and use triggers to populate the info.
Here's a brief guide for MySQL (and Postrges): http://www.go4expert.com/forums/showthread.php?t=7252
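As a hypothetical illustration of Approach 1 (every table and trigger name here is made up; note the trigger only sees the database-level CURRENT_USER(), a limitation discussed below):

DELIMITER //
CREATE TRIGGER orders_ad
AFTER DELETE ON orders
FOR EACH ROW
BEGIN
    -- record who deleted which row, and when
    INSERT INTO audit_log (tableName, tupleID, action, db_user, changed_at)
    VALUES ('orders', OLD.id, 'delete', CURRENT_USER(), NOW());
END//
DELIMITER ;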
Approach 2: Populate the audit info from your Perl database-access code, ideally as part of the same transaction. There's no significant win over the first approach, and many downsides (you don't catch changes made outside of your code, for one).
Disclaimer: I faced this situation in the past, but in PHP. The concepts are PHP-flavored but could be applied to Perl with some thought.
I played with the idea of adding AFTER INSERT, AFTER UPDATE and AFTER DELETE triggers to each table to accomplish the same thing. The problems with this were:
the trigger didn't know the application's 'admin' user, just the DB user (CURRENT_USER);
the biggest issue was that it wasn't feasible to add these triggers to all my tables (I suppose I could have written a script to generate them);
maintainability of the triggers. If you change how things are tracked, you'd have to update all the triggers. I suppose having each trigger call a stored procedure would mostly fix that.
Either way, for my situation, I found the best course of action was in the application layer (not DB layer):
Create a DB abstraction layer if you haven't already (a class that handles all interaction with the database).
Create a function for each action (insert, update, delete).
In each of these functions, after a successful query call, add another query that inserts the relevant information into your tracking table; an example follows the schema below.
If done properly, any action that modifies a table will be tracked. I had to add overrides for specific tables that shouldn't be tracked (what's the point of tracking inserts on the track_table itself, for instance). Here's an example tracking-table schema:
CREATE TABLE `track_table` (
    `id` int(16) unsigned NOT NULL AUTO_INCREMENT,
    `userID` smallint(16) unsigned NOT NULL,
    `tableName` varchar(255) NOT NULL DEFAULT '',
    `tupleID` int(16) unsigned NOT NULL,
    `date_insert` datetime NOT NULL,
    `action` char(12) NOT NULL DEFAULT '',
    PRIMARY KEY (`id`),
    KEY `userID` (`userID`),
    KEY `tableID` (`tableName`,`tupleID`,`date_insert`)
) ENGINE=InnoDB;
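The tracking query the abstraction layer issues might then look like this (the userID and tupleID values are made up; userID comes from the application's session, not from MySQL):

INSERT INTO track_table (userID, tableName, tupleID, date_insert, action)
VALUES (42, 'customer', 1001, NOW(), 'update');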