I have two tables: one is the master table and the other is just a cache. From time to time I check whether the cache table is up to date and no data is missing. The cache table uses the MyISAM engine and the master table uses InnoDB.
To explain it in more detail, here is an example.
The cache table contains fields from the following two tables:
product_categories (cat_id, cat_name, parent_cat_id DEFAULT NULL, parent_cat_name DEFAULT NULL)
products (product_num, product_name, product_desc, price, image, product_date, availability)
It is possible that the cache table is missing some products, or that it contains product data that is no longer accurate.
In the question Compare two MySQL databases, the tool Toad for MySQL was mentioned, but I want to do this using PHP.
Cache table schema
CREATE TABLE `products_cache` (
  `product_num` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `cat_id` int(10) unsigned NOT NULL,
  `parent_cat_id` int(10) unsigned DEFAULT NULL,
  `cat_name` varchar(50) NOT NULL,
  `parent_cat_name` varchar(50) DEFAULT NULL,
  `product_desc` text NOT NULL,
  `price` float(10) unsigned NOT NULL,
  `image` varchar(65) NOT NULL DEFAULT '',
  `product_date` DATE DEFAULT NULL,
  `availability` tinyint(1) NOT NULL DEFAULT '1',
  PRIMARY KEY (`product_num`)
) ENGINE=MyISAM
Possible solution
Compute the MD5 of the fields and store it in the cache table; the next time, check the stored MD5 to see whether the data has changed. It will work fine except for the performance issue (I run the cache fixer every month, so I think I can live with that). Please comment on this.
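For reference, the per-row checksum can be computed inside MySQL itself. A minimal sketch (the column list is just the cached fields; the nullable columns are wrapped in COALESCE because CONCAT_WS silently skips NULLs, which would otherwise let different rows collide):
SELECT product_num,
       MD5(CONCAT_WS('|',
           cat_id, COALESCE(parent_cat_id, 'NULL'),
           cat_name, COALESCE(parent_cat_name, 'NULL'),
           product_desc, price, image,
           COALESCE(product_date, 'NULL'), availability)) AS row_md5
FROM products_cache;
Comparing these hashes against the same expression computed over the master tables tells you which cached rows are stale.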
Instead of computing MD5 sums for all of your data every month, you could simply record changes in a dedicated table using triggers.
CREATE TABLE changes (
  `table` char(30) NOT NULL, -- TODO use an ENUM for better performance; backticks needed, `table` is a reserved word
  `id` int NOT NULL,
  UNIQUE KEY tableId (`table`, `id`)
);
CREATE TRIGGER insert_products AFTER INSERT ON products FOR EACH ROW
  INSERT IGNORE INTO changes (`table`, `id`) VALUES ('products', NEW.product_num);
CREATE TRIGGER update_products AFTER UPDATE ON products FOR EACH ROW
  INSERT IGNORE INTO changes (`table`, `id`) VALUES ('products', OLD.product_num);
CREATE TRIGGER delete_products AFTER DELETE ON products FOR EACH ROW
  INSERT IGNORE INTO changes (`table`, `id`) VALUES ('products', OLD.product_num);
CREATE TRIGGER insert_product_categories AFTER INSERT ON product_categories FOR EACH ROW
  INSERT IGNORE INTO changes (`table`, `id`) VALUES ('product_categories', NEW.cat_id);
CREATE TRIGGER update_product_categories AFTER UPDATE ON product_categories FOR EACH ROW
  INSERT IGNORE INTO changes (`table`, `id`) VALUES ('product_categories', OLD.cat_id);
CREATE TRIGGER delete_product_categories AFTER DELETE ON product_categories FOR EACH ROW
  INSERT IGNORE INTO changes (`table`, `id`) VALUES ('product_categories', OLD.cat_id);
-- do this for every involved table
Once in a while, you could then update the changed rows in a nightly batch job (pseudo code):
for {table, id} in query(select `table`, `id` from changes) {
    cacheRow = buildCacheRow($table, $id)
    doInTransaction {
        query(replace into products_cache values $cacheRow)
        query(delete from changes where `table` = $table and `id` = $id)
    }
}
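If the row-by-row loop turns out to be too slow, the changed products can also be rebuilt set-based in a single statement. A rough sketch, assuming products carries a cat_id column linking it to product_categories (the question's schema does not show how the two tables are related):
REPLACE INTO products_cache
SELECT p.product_num, c.cat_id, c.parent_cat_id, c.cat_name, c.parent_cat_name,
       p.product_desc, p.price, p.image, p.product_date, p.availability
FROM products p
JOIN product_categories c ON c.cat_id = p.cat_id -- assumed link
WHERE p.product_num IN (SELECT `id` FROM changes WHERE `table` = 'products');

DELETE FROM changes WHERE `table` = 'products';
Note that since products_cache is MyISAM, the transaction in the pseudo code above protects only the changes table, not the cache itself.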
I have a MySQL database with some tables.
Into one of these tables I want to insert some new rows via a SQL script.
Unfortunately I have to insert an empty string into two columns, and those two columns are part of a unique key for that table.
So I tried setting UNIQUE_CHECKS before and after the insert, but I'm getting errors because of duplicate entries.
Here is the definition of the table:
CREATE TABLE `Table_A` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(100) NOT NULL,
`number` varchar(25) DEFAULT NULL,
`changedBy` varchar(150) DEFAULT NULL,
`changeDate` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`,`number`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
And the INSERT statement that causes the error:
SET UNIQUE_CHECKS = 0;
INSERT INTO `Table_A`
(`name`, `number`, `changedBy`, `changeDate`)
SELECT DISTINCT '', '', 'myUser', CURRENT_TIMESTAMP
FROM Table_A
WHERE id NOT IN
(
SELECT DISTINCT id
FROM Table_A
);
SET UNIQUE_CHECKS = 1;
As you can see, I'm using UNIQUE_CHECKS, but as I said, this doesn't work properly.
Any help or suggestion would be appreciated.
Patrick
Switching off unique checks for the insert operation doesn't mean that uniqueness will only be enforced for operations that happen after you switch it back on. It just means that the database will not spend time checking the constraint while it is switched off; the constraint itself still has to hold once you switch it back on.
What this means is that you need to ensure that the columns covered by the unique key actually contain unique values before you can turn the checks back on, which you don't do.
If you want to somehow maintain uniqueness only for records inserted after some point in time, you would need to create a trigger and manually check new records against the already existing data (the same possibly goes for updates). But I don't recommend it: you should probably redesign the data so that either the unique key is not there, or the data is truly unique for all records, present and future. A sketch of such a trigger follows.
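A minimal sketch against the posted Table_A (the trigger name and error message are illustrative; <=> is MySQL's NULL-safe equality, needed because number is nullable):
DELIMITER //
CREATE TRIGGER Table_A_before_insert
BEFORE INSERT ON Table_A FOR EACH ROW
BEGIN
    -- reject rows that would collide with an existing (name, number) pair
    IF EXISTS (SELECT 1 FROM Table_A
               WHERE name = NEW.name AND number <=> NEW.number) THEN
        SIGNAL SQLSTATE '23000'
            SET MESSAGE_TEXT = 'Duplicate (name, number) combination';
    END IF;
END //
DELIMITER ;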
Given the following table:
DROP TABLE IF EXISTS my_table;
CREATE TABLE IF NOT EXISTS my_table(
id INT NOT NULL,
timestamp TIMESTAMP(3) DEFAULT CURRENT_TIMESTAMP(3) NOT NULL,
data BLOB NULL,
PRIMARY KEY (id)
);
I can insert into it with:
INSERT INTO my_table (timestamp, data) VALUES
('2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
The above insert did not force me to provide the id field.
How may I create the table (my_table) so that it prevents inserts without id?
I would like every insert to be required to provide the id, i.e.:
INSERT INTO my_table (id, timestamp, data) VALUES
(7, '2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
I was thinking NOT NULL was there for that.
To prevent inserts with an empty value for id (or no value passed), simply define the column as NOT NULL, exactly as you defined it.
I can't see how your example worked (i.e. inserting only into (timestamp, data)), unless the server runs in a non-strict sql_mode, in which case MySQL silently substitutes the implicit default 0 for the missing NOT NULL column.
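A quick way to verify that assumption:
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

INSERT INTO my_table (timestamp, data)
VALUES ('2014-07-11 11:25:48.185', NULL);
-- now fails with: ERROR 1364 (HY000): Field 'id' doesn't have a default value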
Now, the fact that there is another table with a trigger that inserts into this one does not have any effect on the id column of this table. If you define it as AUTO_INCREMENT, whenever you insert a new row the id will automatically get a new value, fully independent from any data of the first table.
You can have as many tables as you wish with auto-incremented fields, each running a different sequence (and hence their numbering will be fully independent).
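A quick illustration (table names made up):
CREATE TABLE a (id INT AUTO_INCREMENT PRIMARY KEY);
CREATE TABLE b (id INT AUTO_INCREMENT PRIMARY KEY);

INSERT INTO a VALUES (NULL), (NULL); -- a gets ids 1 and 2
INSERT INTO b VALUES (NULL);         -- b independently starts at 1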
To summarize:
CREATE TABLE IF NOT EXISTS my_table(
  id INT NOT NULL AUTO_INCREMENT,
  timestamp TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3),
  data BLOB NULL,
  PRIMARY KEY (id)
);
We are using a table with the following schema:
CREATE TABLE `user_subscription` (
`ID` varchar(40) NOT NULL,
`COL1` varchar(40) NOT NULL,
`COL2` varchar(30) NOT NULL,
`COL3` datetime NOT NULL,
`COL4` datetime NOT NULL,
`ARCHIVE` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`ID`)
)
Now we want to partition on the ARCHIVE column. ARCHIVE can have only 2 values, 0 or 1, so 2 partitions.
In our case we are actually using partitioning as an archival mechanism. To partition, we need to make the ARCHIVE column part of the primary key. The problem is that 2 rows can then have the same ID with different ARCHIVE values. That in itself is not a concern for us, since the 2 rows will live in different partitions. The real problem is that when we update the ARCHIVE value of one of these rows to move it to the archive partition, the update fails with a "Duplicate entry" error.
Can somebody help in this regard?
Unfortunately,
A UNIQUE INDEX (or a PRIMARY KEY) must include all columns in the table's partitioning function
and since MySQL does not support check constraints either, the only ugly workaround I can think of is enforcing the uniqueness manually through triggers:
CREATE TABLE t (
id INT NOT NULL,
archived TINYINT(1) NOT NULL DEFAULT 0,
PRIMARY KEY (id, archived) -- required by MySQL's limitation on partitioning
)
PARTITION BY LIST(archived) (
PARTITION pActive VALUES IN (0),
PARTITION pArchived VALUES IN (1)
);
DELIMITER //

CREATE TRIGGER tInsert
BEFORE INSERT ON t FOR EACH ROW
CALL checkUnique(NEW.id) //

CREATE TRIGGER tUpdate
BEFORE UPDATE ON t FOR EACH ROW
-- only check when the id itself changes; otherwise flipping "archived"
-- on an existing row would collide with that very row
IF NEW.id <> OLD.id THEN
    CALL checkUnique(NEW.id);
END IF //
CREATE PROCEDURE checkUnique(pId INT)
BEGIN
DECLARE flag INT;
DECLARE message VARCHAR(50);
SELECT id INTO flag FROM t WHERE id = pId LIMIT 1;
IF flag IS NOT NULL THEN
-- the below tries to mimic the error raised
-- by a regular UNIQUE constraint violation
SET message = CONCAT("Duplicate entry '", pId, "'");
SIGNAL SQLSTATE "23000" SET
MYSQL_ERRNO = 1062,
MESSAGE_TEXT = message,
COLUMN_NAME = "id";
END IF;
END //

DELIMITER ;
(fiddle)
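With these triggers in place, the intended archival flow works while cross-partition duplicates are rejected (illustrative values):
INSERT INTO t (id, archived) VALUES (1, 0); -- ok, lands in pActive
UPDATE t SET archived = 1 WHERE id = 1;     -- ok, the row moves to pArchived
INSERT INTO t (id, archived) VALUES (1, 0); -- fails: Duplicate entry '1'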
MySQL's limitations on partitioning being such a downer (in particular its lack of support for foreign keys), I would advise against using it altogether until the table grows so large that it becomes an actual concern.
I have two MySQL tables and want to insert related records into both at once, instead of creating one record, getting its id, and then inserting the related records.
Here are the tables:
CREATE TABLE `visit` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`ip_address` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`)
)
CREATE TABLE `visitmeta` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`page_visit_id` int(11) NOT NULL,
`key` varchar(255) NOT NULL,
`value` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
)
Currently I insert one record into visit, get its id, and then insert records into visitmeta. Is there a way to create a new record in visit and, in the same query, create the visitmeta records?
It's not possible to insert records in two tables with a single query, but you can do it in just two queries using MySQL's LAST_INSERT_ID() function:
INSERT INTO visit
(ip_address)
VALUES
('1.2.3.4')
;
INSERT INTO visitmeta
(page_visit_id, `key`, `value`)
VALUES
(LAST_INSERT_ID(), 'foo', 'bar'),
(LAST_INSERT_ID(), 'baz', 'qux')
;
Note also that it's often more convenient/performant to store IP addresses in their raw, four-byte binary form (one can use MySQL's INET_ATON() and INET_NTOA() functions to convert to/from such form respectively).
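For illustration (the ip_address column would then be an INT UNSIGNED rather than a varchar(20)):
SELECT INET_ATON('1.2.3.4'); -- 16909060
SELECT INET_NTOA(16909060);  -- '1.2.3.4'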
I have a table called promotion_codes
CREATE TABLE promotion_codes (
id int(10) UNSIGNED NOT NULL auto_increment,
created_at datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
code varchar(255) NOT NULL,
order_id int(10) UNSIGNED NULL DEFAULT NULL,
allocated_at datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This table is pre-populated with available codes that will be assigned to orders that meet a specific criteria.
What I need to ensure is that after the ORDER is created, that I obtain an available promotion code and update its record to reflect that it has been allocated.
I am not 100% sure how to not grab the same record twice if simultaneous requests come in.
I have tried locking the row during a select and locking the row during an update; both still seem to allow a second (simultaneous) attempt to grab the same record, which is what I want to avoid.
UPDATE promotion_codes
SET allocated_at = "' . $db_now . '", order_id = ' . $donation->id . '
WHERE order_id IS NULL LIMIT 1
You can add a second table which holds all used codes. That way you can use a unique constraint in the assignment table to make sure that a code is not assigned twice.
CREATE TABLE `used_codes` (
  `usage` INTEGER PRIMARY KEY AUTO_INCREMENT,
  `id` INTEGER NOT NULL UNIQUE, -- makes sure that no code is assigned twice
  `allocated_at` datetime NOT NULL
);
You add the ID of a used code into the used_codes table, and afterwards query which code you just used. When these two operations are in one transaction, the entire transaction will fail when there is a second attempt to use the same code.
I did not test the following code; you might have to adjust it.
You also need to make sure that your server meets the requirements for transactions (i.e. the tables involved use a transactional engine such as InnoDB).
-- These changes have to be atomic, so don't use autocommit
SET autocommit = 0;
START TRANSACTION;

INSERT INTO `used_codes` (`id`, `allocated_at`)
SELECT pc.`id`, NOW()
FROM `promotion_codes` pc
-- LEFT JOIN instead of a NOT IN subquery: MySQL does not allow selecting
-- from the INSERT target table inside a subquery
LEFT JOIN `used_codes` uc ON uc.`id` = pc.`id`
WHERE uc.`id` IS NULL
LIMIT 1;

-- LAST_INSERT_ID() is per-connection, so parallel transactions cannot
-- interfere with the `usage` value read back here
SELECT `code` FROM `promotion_codes`
WHERE `id` = (SELECT `id` FROM `used_codes` WHERE `usage` = LAST_INSERT_ID());

COMMIT;
You can use the returned code if the transaction succeeded. If more than one process was trying to use the same code, only one of them succeeds, while the rest fail with insert errors about the duplicated row. In your software you need to distinguish between the duplicate-row error and other errors, and re-execute the statement on duplication errors.
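For reference, the failure to catch looks like this (the key name in the message depends on how the unique constraint was named):
INSERT INTO used_codes (id, allocated_at) VALUES (42, NOW());
INSERT INTO used_codes (id, allocated_at) VALUES (42, NOW());
-- ERROR 1062 (23000): Duplicate entry '42' for key 'id'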