I have two tables: Shop and Product.
CREATE TABLE Shop (
id INT AUTO_INCREMENT,
shop_id INT,
PRIMARY KEY (id)
);
CREATE TABLE Product (
product_id INT AUTO_INCREMENT,
p_name VARCHAR(100),
p_price INT,
shop_id INT,
PRIMARY KEY (product_id),
FOREIGN KEY (shop_id) REFERENCES Shop(id)
);
The server uses Node with the mysql2 package for queries.
On the client side, I display all Products related to a specific Shop in a table.
The user can change Products, and when they press Save, requests are made that send the new data to the server and store it.
The user can either change existing Products or add new ones.
But I have concerns about how this will behave with a relatively large number of products per shop. Let's say there are 1000 of them.
Newly inserted rows are marked with the flag saved_in_db=false.
Existing rows that were changed are marked changed=true.
I considered a few approaches:
On the server, filter the array of records received from the client and INSERT the newly created ones that are not stored yet.
But to UPDATE the existing Products, I would need to create a bunch of UPDATE Product SET p_name = ? WHERE product_id = ? queries and execute them all at once.
Alternatively: take all Products with the specified shop_id, DELETE them, and INSERT the new bulk of data, making no distinction between existing and changed records.
In this second approach, I see two cons.
First, a constant (full) amount of data is sent from the client to the server on every save, even for small changes.
Second, the DB could run out of ids. If there are 10 shops with 1000 Products each, and users frequently update records, then every update, even one that adds or changes a single record, will advance the AUTO_INCREMENT id by around 1000.
Is executing a bunch of UPDATE queries one after another really the only way to update a certain set of records in the DB?
You could use INSERT ... ON DUPLICATE KEY UPDATE.
INSERT INTO Product (product_id, p_name)
VALUES (123, 'newname1'), (456, 'newname2'), (789, 'newname3'), ...more...
ON DUPLICATE KEY UPDATE p_name = VALUES(p_name);
This does not change the primary key values; it only updates the columns you tell it to.
You must include the product ids in the INSERT VALUES, because that's how it detects that you're inserting a row that already exists in the table.
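For the schema in the question, a single statement can cover both cases: changed rows pass their existing product_id, while new rows pass NULL so that AUTO_INCREMENT assigns one. A minimal sketch (the ids and values are placeholders):

INSERT INTO Product (product_id, p_name, p_price, shop_id)
VALUES
  (123,  'renamed product',   250, 1),  -- existing row: updated in place
  (NULL, 'brand new product',  99, 1)   -- new row: gets a fresh product_id
ON DUPLICATE KEY UPDATE
  p_name  = VALUES(p_name),
  p_price = VALUES(p_price);

One caveat: with InnoDB's default settings, rows that end up as updates can still consume AUTO_INCREMENT values, so the counter may advance faster than the actual row count.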
I have some products which have their own IDs, and I'm designing a MySQL DB into which I will import this data. There is much more than the product table, but that doesn't matter now.
Is it a good idea to reuse the existing product IDs as the primary key, so that the existing product IDs are imported into the AUTO_INCREMENT ID column? I have never done anything like what I'm describing.
It is also worth mentioning that the IDs are normal unsigned integer values, and that the products currently exist only as rows in an XLS sheet.
I think it would be great to keep the IDs as they are if you have any relationships built upon those IDs, and for the new IDs that will be added, just let them increment with the identity property.
To insert explicit IDs into an identity (auto-increment) column, use the following:
SET IDENTITY_INSERT [TableName] ON
-- --------------------------------------------
-- your INSERT query goes here
-- --------------------------------------------
SET IDENTITY_INSERT [TableName] OFF
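Note that SET IDENTITY_INSERT is SQL Server syntax. Since the question mentions MySQL, it is worth knowing that MySQL needs no special setting: explicit values can be inserted into an AUTO_INCREMENT column directly, and the counter automatically continues past the highest inserted value. A minimal sketch (the table and column names are illustrative):

CREATE TABLE product (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100)
);

-- Imported rows keep their original ids.
INSERT INTO product (id, name) VALUES (17, 'widget'), (42, 'gadget');

-- New rows omit the id; AUTO_INCREMENT continues from the highest
-- existing value, so this row gets id 43.
INSERT INTO product (name) VALUES ('gizmo');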
I am working on an assignment and need your help with the following in an SQL database:
I have 3 tables:
Product
LineItem
Invoice
LineItem is a bridge table, and I need to insert data into it, but it requires a ProductID and an InvoiceNumber.
In my case the Invoice table is empty, and it will be filled from the data that the LineItem table passes.
The problem is: how can I create an invoice before having the data from the LineItem table?
I am using these tables for an online shopping cart.
It's really hard for me to explain this problem. I hope you understand it. Thanks!
It sounds like you have a foreign key constraint forcing the existence of an Invoice record prior to inserting your line item records. It is hard to say exactly based on the phrasing of your question, but it could be something like this:
--Table variable to hold line items
DECLARE @lineItems TABLE
(
InvoiceNumber INT,
Quantity INT
)

INSERT INTO @lineItems VALUES(1,1)
INSERT INTO @lineItems VALUES(1,2)

--ADD INVOICE RECORD FIRST AND SUM QUANTITIES ETC.
INSERT INTO Invoice
SELECT InvoiceNumber, SUM(Quantity)
FROM @lineItems
GROUP BY InvoiceNumber

--NOW YOU CAN ADD LINE ITEMS
INSERT INTO LineItems SELECT * FROM @lineItems
This is a pattern you could use if that was your goal.
If you want to insert these LineItems on the fly as the user clicks Add on the webpage, I wouldn't use your LineItem SQL table for caching this way. Without knowing anything about your application it is hard to say, but you really should be caching this data in the HTTP session or on the client (array, JSON, local storage, etc.). If you do choose to do this with an SQL table, just make a new LineItem staging table without the constraints; then, similarly to the above, you can use that table to insert into your real LineItem table, as in the sketch below.
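A minimal sketch of such a staging table (the column names and placeholder values are assumptions; it simply mirrors LineItem with no foreign keys):

-- Holds in-progress cart lines with no FK to Invoice yet.
CREATE TABLE LineItemStaging (
  SessionId VARCHAR(64) NOT NULL,
  ProductID INT NOT NULL,
  Quantity INT NOT NULL
);

-- At checkout, create the Invoice row first, then move the lines across.
INSERT INTO LineItem (InvoiceNumber, ProductID, Quantity)
SELECT 1001, ProductID, Quantity
FROM LineItemStaging
WHERE SessionId = 'abc123';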
I have a table called events where all new information goes. This table works as a reference for all news feed queries, so event items are selected from there and the information corresponding to each event is retrieved from the correct tables.
Now, here's my problem. I have E_IDs in the events table which correspond to the ID of an event in a different table, be it T_ID for tracks, S_ID for statuses, and so on. These IDs could collide, so for the time being I just used a different AUTO_INCREMENT starting value for each table: statuses started at 500, tracks at 0, etc. Obviously, I don't want to do that, as I have no idea yet which table is going to have the most data in it. I would assume statuses would quickly exceed tracks.
The information is inserted into the events table with triggers. Here's an example of one:
DELIMITER $$
CREATE TRIGGER status_after_insert AFTER INSERT ON status -- trigger name assumed
FOR EACH ROW BEGIN
  INSERT INTO events (action, E_ID, ID)
  VALUES ('has some news.', NEW.S_ID, NEW.ID);
END$$
DELIMITER ;
That one's for the status table.
Is there an addition I can make to that trigger to ensure that NEW.S_ID does not equal an E_ID currently in events, and, if it does, change the S_ID accordingly?
Alternatively, is there some kind of key I can use to reference events when auto-incrementing the S_ID, so that the S_ID is never incremented to a value that collides with an E_ID?
Those are my thoughts. I think the latter solution would be better, but I doubt it is possible; or if it is, it would require another reference table and would be too complex.
It's really uncommon to require a unique id across tables, but here's a solution that will do it.
/* Create a single table to store unique IDs */
CREATE TABLE object_ids (
id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
object_type ENUM('event', ...) NOT NULL
) ENGINE=InnoDB;
/* Independent object tables do not auto-increment, and have a FK to the object_ids table */
CREATE TABLE events (
id INT UNSIGNED NOT NULL PRIMARY KEY,
...
CONSTRAINT FOREIGN KEY (id) REFERENCES object_ids (id)
) ENGINE=InnoDB;
/* When creating a new record, first insert your object type into the object_ids table */
INSERT INTO object_ids(object_type) VALUES ('event');
/* Then, get the auto-increment id. */
SET @id = LAST_INSERT_ID();
/* And finally, create your object record. */
INSERT INTO events (id, ...) VALUES (@id, ...);
Obviously, you would duplicate the structure of the events table for your other tables.
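For instance, a tracks table could follow the same pattern (a sketch: the title column is an assumption, and 'track' would need to be added to the object_type ENUM):

CREATE TABLE tracks (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  title VARCHAR(255),
  CONSTRAINT FOREIGN KEY (id) REFERENCES object_ids (id)
) ENGINE=InnoDB;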
You could also just use a Universal Unique Identifier (UUID).
A UUID is designed as a number that is globally unique in space and time. Two calls to UUID() are expected to generate two different values, even if these calls are performed on two separate computers that are not connected to each other.
Please read more about it in the manual.
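A minimal usage sketch (table and column names are illustrative): UUID() returns a 36-character string, so the key column must be CHAR(36), or BINARY(16) combined with UUID_TO_BIN() on MySQL 8+.

CREATE TABLE events_uuid (
  id CHAR(36) NOT NULL PRIMARY KEY,
  action VARCHAR(255)
);

INSERT INTO events_uuid (id, action) VALUES (UUID(), 'has some news.');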
There's also a shorter version.
UUID_SHORT() should do the trick.
It will generate 64-bit unsigned integers for you.
According to the docs, the generator logic is:
(server_id & 255) << 56
+ (server_startup_time_in_seconds << 24)
+ incremented_variable++;
The value of UUID_SHORT() is guaranteed to be unique if the following conditions hold:
The server_id value of the current server is between 0 and 255 and is unique among your set of master and slave servers
You do not set back the system time for your server host between mysqld restarts
You invoke UUID_SHORT() on average fewer than 16 million times per second between mysqld restarts
mysql> SELECT UUID_SHORT();
-> 92395783831158784
If you are curious what your server id is, you can use either of these:
SELECT @@server_id;
SHOW VARIABLES LIKE 'server_id';
I have a stocks table (for products/stocks of a retail store) and a serials table (barcodes issued for each stock).
Basically, when new stocks are introduced to the database, the system issues a serial number for each stock, based on the primary-key auto-increment value of the serials table.
Problem is they both depend on each other...
I'll explain:
STOCKS TABLE
stock_id int(11)
product_name varchar(50)
serial int(30) <--- relies on the serials generated by system, stored in the SERIALS TABLE
SERIALS
sn_id int(11)
stock_id int(11) <-- relies on the new stocks inserted in the stocks table
serial int(30) <---- serial NO generated for specific stock.
Each STOCK inserted needs to store the serial number generated for it, and each SERIAL generated must be recorded in the table with the stock_id (primary key) of the stock being inserted.
This basically means three SQL statements per new stock:
get the next auto-increment value of the serials table (used to generate the serials properly)
insert the stock into the table with its serial
get the insert_id of said stock and insert that into the serials table
This works, but I'm wondering if there's a better approach. So far, here's what I have running:
create a serial_lock file in the home directory (this prevents other scripts from issuing new serial numbers to other stocks, avoiding conflicts on concurrent runs)
GENERATE the required serial numbers by getting the next auto_increment value of the serials table, and store them in a variable for now, e.g.
$assigned_serials_array[$index] = $prefix . $index; // results in BN-0001 ("BN-" is the prefix; the rest is the padded auto-inc value, incremented per loop)
INSERT INTO stocks each stock, and get the insert_id
INSERT INTO serials a record of the serial being issued to that specific stock
after the loop is done, delete the lock file
PS:
My original version actually does an INSERT into the serials table first, and then an UPDATE on that serials table after a stock_id is generated. I didn't feel comfortable with that because of the extra SQL statement being issued, although it is the safest way, and I don't need to worry about lock files and conflicts.
Hmmm... any thoughts?
EDIT:
I decided to change my method.
Each SERIAL generated belongs to one STOCK (stock_id), so I decided to forget about the incremental sequencing of serial numbers (0001, 0002, 0003).
I decided to go ahead and use the stock_id of the specific stock being issued an SN.
So:
get the next insert id and generate the SN based on it,
INSERT the STOCK with the generated SN,
INSERT the SERIAL record, referencing the stock_id with that same next insert id.
Done!
I just really wanted a perfectly sequenced SN.
Do not create lock files; this is just wrong.
Instead, DO use transactions. This example is in Perl:
my $dbh = DBI->connect("dbi:mysql...", "user", "password");
$dbh->begin_work(); # start new transaction
$dbh->do("INSERT INTO serials ..."); # generate new serial
my $new_serial = $dbh->{mysql_insertid};
$dbh->do("INSERT INTO stocks (..., serialno) VALUES (..., $new_serial)");
# do some more work like inserting into other tables
$dbh->commit(); # finally, commit the transaction
Note that you need to use the InnoDB engine for transactions to work.
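The same pattern in plain SQL (a sketch: column names are taken from the question's table outlines, and the placeholder serial value of 0 is an assumption):

START TRANSACTION;

-- Claim an auto-increment sn_id by inserting a placeholder serial row.
INSERT INTO serials (serial) VALUES (0);
SET @sn_id = LAST_INSERT_ID();

-- Insert the stock, storing the serial number issued to it.
INSERT INTO stocks (product_name, serial) VALUES ('Sample product', @sn_id);
SET @stock_id = LAST_INSERT_ID();

-- Back-fill the serial row with the stock it belongs to.
UPDATE serials SET stock_id = @stock_id, serial = @sn_id WHERE sn_id = @sn_id;

COMMIT;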
I am trying to run a query:
INSERT
INTO `ProductState` (`ProductId`, `ChangedOn`, `State`)
SELECT t.`ProductId`, t.`ProcessedOn`, 'Activated'
FROM `tmpImport` t
LEFT JOIN `Product` p
ON t.`ProductId` = p.`Id`
WHERE p.`Id` IS NULL
ON DUPLICATE KEY UPDATE
`ChangedOn` = VALUES(`ChangedOn`)
(I am not quite sure the query is correct, but it appears to be working.) However, I am running into the following issue: I am running this query before creating the entry in the Product table, and I am getting a foreign key constraint error because the entry is not in the Product table yet.
My question is: is there a way to run this query, but wait until the next query (which updates the Product table) before performing the INSERT portion of the query above? Also note that if the query is run after the Product entry is created, it will no longer see p.Id as NULL and will therefore fail, so it has to be performed before the Product entry is created.
Edit:
The concept I am trying to achieve is as follows:
For starters, I am importing a set of data into a temp table. The Product table is a list of all products that are (or have been in the past) added through the data set in the temp table. What I need is a separate table that records state changes to each product, as sometimes a product becomes unavailable (no longer in the data set provided by the vendor).
The ProductState table is as follows:
CREATE TABLE IF NOT EXISTS `ProductState` (
`ProductId` VARCHAR(32) NOT NULL ,
`ChangedOn` DATE NOT NULL ,
`State` ENUM('Activated','Deactivated') NULL ,
PRIMARY KEY (`ProductId`, `ChangedOn`) ,
INDEX `fk_ProductState_Product` (`ProductId` ASC) ,
CONSTRAINT `fk_ProductState_Product`
FOREIGN KEY (`ProductId` )
REFERENCES `Product` (`Id` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_general_ci;
The foreign key is an identifying relationship with the Product table (Product.Id).
Essentially, what I am trying to accomplish is this:
1. Any time a new product (or previously deactivated product) shows up in the vendor data set, a record is created in the ProductState table as 'Activated'.
2. Any time a product (that is activated) does not show up in the vendor data set, a record is created as 'Deactivated' in the ProductState table.
The purpose of the ProductState table is to track the activation and deactivation states of a product. ProductState also has a many-to-one relationship with the Product table, and the state of a product will only change once daily, therefore my primary key is (ProductId, ChangedOn).
With foreign keys, you definitely need to have the data in the Product table first, before entering the state. Think about it with this logic: "How can something that doesn't exist have a state?"
So, in pseudocode, here is what you should do:
Read in the vendor's product list
Compare it to the existing list in your Product table
If new ones are found: insert them into the Product table, then insert them into the ProductState table as 'Activated'
If any are missing from the vendor's list: insert them into the ProductState table as 'Deactivated'
All of this should be done in one transaction. Note that you should NOT delete things from the Product table unless you really want to delete all information associated with them, i.e. also delete all the "states" you have stored.
Rather than trying to do this all in one query, your best bet is to create a stored procedure that does the work step by step as above. I think it gets overly complicated (or, in this case, probably impossible) to do it all in one query.
Edit: Something like this:
CREATE PROCEDURE `some_procedure_name` ()
BEGIN
-- Break the tmpImport data down into two temp tables: new and removed
-- (MySQL has no SELECT ... INTO table; use CREATE TEMPORARY TABLE ... SELECT)
CREATE TEMPORARY TABLE _temp_new_products
SELECT t.*
FROM `tmpImport` t
LEFT JOIN `Product` p
ON t.`ProductId` = p.`Id`
WHERE p.`Id` IS NULL;

CREATE TEMPORARY TABLE _temp_removed_products
SELECT p.*
FROM `Product` p
LEFT JOIN `tmpImport` t
ON t.`ProductId` = p.`Id`
WHERE t.`ProductId` IS NULL;

-- For each entry in _temp_new_products:
-- 1. Insert into Product table
-- 2. Insert into ProductState table as 'Activated'

-- For each entry in _temp_removed_products:
-- 1. Insert into ProductState table as 'Deactivated'

-- drop the temporary tables
DROP TEMPORARY TABLE _temp_new_products;
DROP TEMPORARY TABLE _temp_removed_products;
END
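The commented steps can be done set-based rather than row by row. A sketch (assuming Product has only the Id column to fill; adjust the column lists to the real schema, and note that CURDATE() for the deactivation date is an assumption):

INSERT INTO Product (Id)
SELECT ProductId FROM _temp_new_products;

INSERT INTO ProductState (ProductId, ChangedOn, State)
SELECT ProductId, ProcessedOn, 'Activated' FROM _temp_new_products;

INSERT INTO ProductState (ProductId, ChangedOn, State)
SELECT Id, CURDATE(), 'Deactivated' FROM _temp_removed_products;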
I think you should:
start a transaction
do your insert into the Products table
do your insert into the ProductState table
commit the transaction
This will avoid any foreign key errors, but will also make sure your data is always accurate. You do not want to 'avoid' the foreign key constraint in any way, and InnoDB (which I'm sure you are using) never defers these constraints unless you turn them off completely.
Also, no: you cannot insert into multiple tables in one INSERT ... SELECT statement.
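A minimal sketch of that order of operations (the Product table's columns are not shown in the question, so only Id, inferred from the foreign key, appears here; the value is a placeholder):

START TRANSACTION;

-- 1. Parent row first, so the FK on ProductState can be satisfied.
INSERT INTO `Product` (`Id`) VALUES ('ABC123');

-- 2. Now the state row can reference it.
INSERT INTO `ProductState` (`ProductId`, `ChangedOn`, `State`)
VALUES ('ABC123', CURDATE(), 'Activated');

COMMIT;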