Insert into table if two keys do not exist - mysql

I have a table that maps itemId (item.id) to tagId (tag.id).
I'm writing a query that needs to insert a new row, but only if the (itemId, tagId) pair does not already have an entry. What's the best way to do this?
Here is my current query, which doesn't yet check for an existing match:
// create tag-item maps
$sql = 'INSERT INTO item_tag
        SELECT :itemId AS iid, tag.id AS tid
        FROM tag
        WHERE tag.name = :name';
try {
    $stmt = $db->dbh->prepare($sql);
    for ($i = 0; $i < $count; $i++) {
        $data = array(':itemId' => $itemId, ':name' => $tags[$i]);
        $result = $stmt->execute($data);
        if ($result !== false) {
            // Do nothing
        } else {
            return false;
        }
    }
    return true;
} catch (PDOException $e) {
    logit($e->getMessage());
    return false;
}

A UNIQUE constraint would be the ideal solution, and according to the docs the MyISAM engine supports them.
ALTER TABLE item_tag ADD CONSTRAINT item_tag_iid_tid_uq UNIQUE (iid, tid);
Then, if your INSERT tries to add a duplicate (iid, tid) pair, it will throw an error that you can handle in your catch block.
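If you'd rather not handle the exception at all, MySQL's INSERT IGNORE turns the duplicate-key error into a silent skip once the constraint is in place. A minimal sketch, reusing the query from the question:
-- With the UNIQUE (iid, tid) constraint in place, INSERT IGNORE
-- skips rows that would violate it instead of raising an error
INSERT IGNORE INTO item_tag
SELECT :itemId AS iid, tag.id AS tid
FROM tag
WHERE tag.name = :name;
(Be aware that INSERT IGNORE also downgrades some other errors, such as bad data conversions, to warnings.)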
If you don't want to use a UNIQUE constraint, you could instead add an existence check to your INSERT query:
INSERT INTO item_tag
SELECT :itemId AS iid, tag.id AS tid
FROM tag
WHERE tag.name = :name AND NOT EXISTS (
    SELECT * FROM item_tag WHERE iid = :itemId AND tid = tag.id)
I tested this using my own differently-named tables, so hopefully I "translated" it correctly to your table/column names.

Related

MySQL set AUTO_INCREMENT value to MAX(id) + 1 shortcut?

I constantly need to "reset" the AUTO_INCREMENT value of my tables after I delete some of their rows. Let me explain with an actual example:
I have a table called CLIENT. Say that before removing some rows the auto_increment was set to 11. Then I delete the 4 last rows; the auto_increment is still set to 11, so when I insert clients again there will be a hole in the ids.
I always need to "clean" the auto_increment, e.g. using the function below:
function cleanAutoIncrement($tableName, $columnAutoIncrement, $pdo)
{
    $r = false;
    try {
        $p = $pdo->prepare("SELECT IFNULL(MAX($columnAutoIncrement) + 1, 1) AS 'max' FROM $tableName LIMIT 1;");
        $p->execute();
        $max = $p->fetch(PDO::FETCH_ASSOC)['max'];
        $p = $pdo->prepare("ALTER TABLE $tableName AUTO_INCREMENT = $max;");
        $p->execute();
        $r = true;
    }
    catch (Exception $e) {
        $r = false;
    }
    return $r;
}
What the function does is get the maximum id in the table, increment it by 1, and use that value (if there are no rows in the table, it uses 1). It then alters the table to reset the counter to that "clean" id so that no hole of ids is left.
QUESTION
Is there any MySQL command to perform this task without having to do this manually ?
To close this question: no such shortcut exists in MySQL, and performing this task is not recommended anyway.
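One near-shortcut is worth noting, though: MySQL will not let you set the counter below the current maximum, and InnoDB reacts by clamping the requested value up to MAX(id) + 1. So a single hard-coded statement usually achieves the same cleanup (a sketch, assuming the CLIENT table above uses InnoDB):
-- InnoDB clamps a too-low value up to MAX(id) + 1 automatically
ALTER TABLE CLIENT AUTO_INCREMENT = 1;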

Insert rows using the result of a SELECT: best way to do it

I have an array of tags inserted in my database, and I need to insert them in another table with itemID and tagID.
The thing is that I don't have the tagID; instead I have a TagName with the name. I need to get the tagID so I can insert it afterward, but I'm wondering if this is possible to achieve in one single query.
I mean: search for the name, get its tagID, and then insert into the array, or even inserting them one by one would work.
If I understood you, my way is to create two tables first. If you know a better way, please let me know.
Table Tag
    id, keyword
Table C_Tag
    id, tag_id, user_id
Foreign key: C_Tag(tag_id) references Tag(id)
<?php
$tag = $_POST['tag_name'];           // get the tag name
$current_user = $_POST['users_id'];  // get the current user id
include 'db_tag.php';

$stmt = $db->prepare("SELECT keyword, id FROM Tag WHERE keyword = :tag");
$stmt->bindParam(':tag', $tag);
$stmt->execute();
$row_tag = $stmt->fetch(PDO::FETCH_ASSOC);

// if the keyword exists, only update the C_Tag table
if ($row_tag && $row_tag['keyword'] == $tag) {
    $stmt = $db->prepare("SELECT t.id, t.keyword, ct.tag_id, ct.user_id
                          FROM Tag t, C_Tag ct
                          WHERE t.id = ct.tag_id AND ct.user_id = :current_id");
    $stmt->bindValue(':current_id', $current_user);
    $stmt->execute();
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    $row_id      = $row['id'];
    $row_tag_id  = $row['tag_id'];
    $row_user_id = $row['user_id'];
    // make sure the same user cannot enter a duplicate keyword
    if ($current_user == $row_user_id && $row_id != $row_tag_id) {
        $stmt = $db->prepare("INSERT INTO C_Tag (user_id, tag_id)
                              SELECT :current_id, id
                              FROM Tag
                              WHERE keyword = :keyword");
        $stmt->bindValue(':current_id', $current_user);
        $stmt->bindValue(':keyword', $tag);
        $stmt->execute();
    }
} else {
    // if the keyword is new, insert it into both tables
}
?>
This appears to be the solution:
How to copy a row and insert in same table with a autoincrement field in MySQL?
insert into zr1f4_k2_tags_xref (id, tagID, itemID) select NULL, tagID, #itemID from zr1f4_k2_tags where name=#tagName
but I can't make it work.
These are the tables:
zr1f4_k2_tags
    id int(11)
    name varchar(255)
    published smallint(6)
zr1f4_k2_tags_xref
    id int(11)
    tagID int(11)
    itemID int(11)
I'm trying to insert the ID of the tag related to the tag name, together with the item ID, which I'll add explicitly; the id column is auto-increment.
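For reference, a sketch of the corrected query: the xref table's id is auto-increment, so it can simply be omitted, and the tag id must come from zr1f4_k2_tags.id (the 42 and 'linux' below are made-up stand-ins for the explicit item ID and the tag name):
-- tagID comes from zr1f4_k2_tags.id; the xref id fills itself in
INSERT INTO zr1f4_k2_tags_xref (tagID, itemID)
SELECT id, 42
FROM zr1f4_k2_tags
WHERE name = 'linux';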

MySQL: fetch records from table 1 that do not exist in table 2

I created a PHP function to fetch records from an SQL table subscriptions, and I want to add a condition to the mysql_query to ignore the records in table subscriptions that exist in table removed_items. Here is my code:
function subscriptions_func($user_id, $limit) {
    $subs = array();
    $sub_query = mysql_query("
        SELECT `subscriptions`.`fo_id`, `subscriptions`.`for_id`, `picture`.`since`, `picture`.`user_id`, `picture`.`pic_id`
        FROM `subscriptions`
        LEFT JOIN `picture`
        ON `subscriptions`.`fo_id` = `picture`.`user_id`
        WHERE `subscriptions`.`for_id` = $user_id
        AND `picture`.`since` > `subscriptions`.`timmp`
        GROUP BY `subscriptions`.`fo_id`
        ORDER BY MAX(`picture`.`since_id`) DESC
        $limit
    ");
    while ($sub_row = mysql_fetch_assoc($sub_query)) {
        $subs[] = array(
            'fo_id'   => $sub_row['fo_id'],
            'for_id'  => $sub_row['for_id'],
            'user_id' => $sub_row['user_id'],
            'pic_id'  => $sub_row['pic_id'],
            'since'   => $sub_row['since'],
        );
    }
    return $subs;
}
My solution was to create another function to fetch the records from table removed_items, and then add a PHP condition where I call subscriptions_func() to skip the records that also appear in removed_items, as follows:
$sub = subscriptions_func($user_id);
foreach ($sub as $s) {
    $rmv_sub = rmv_items_func($s['pic_id']);
    if ($rmv_sub['pic_id'] != $s['pic_id']) {
        echo $s['pic_id'];
    }
}
This solution succeeds in skipping the items in the table removed_items, but it leaves gaps in the array stored in $sub, which show up as blank spots in the echoed items.
Is there a condition I can add to the query in subscriptions_func() that cuts out all these additional conditions and checks?
Assuming id is the primary key of subscriptions and subs_id is the foreign key in removed_items, you just have to add a condition to the WHERE clause. Something like this should work:
...
AND `subscriptions`.id NOT IN (SELECT `removed_items`.subs_id FROM `removed_items`)
...
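An equivalent anti-join form, which often performs better than NOT IN on large tables (same assumed id/subs_id columns), joins removed_items and keeps only the rows with no match:
...
LEFT JOIN `removed_items`
ON `removed_items`.subs_id = `subscriptions`.id
...
AND `removed_items`.subs_id IS NULL
...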
Not related to your problem:
Your code seems vulnerable to SQL injection: use prepared statements to prevent this.
The original MySQL API is deprecated; it is highly recommended to switch to MySQLi instead.

Perl with MySQL, terribly slow; how to accelerate it

unit
    id, fir_name, sec_name
author
    id, name, unit_id
author_paper
    id, author_id, paper_id
I want to unify authors ('same author' means the names are the same and their units' fir_names are the same), and I have to change the author_paper table at the same time.
Here is what I do:
$conn->do('create index author_name on author (name)');
my $sqr = $conn->prepare("select name from author group by name having count(*) > 1");
$sqr->execute();
while (my @row = $sqr->fetchrow_array()) {
    my $dup_name = $row[0];
    $dup_name = formatHtml($dup_name);
    my $sqr2 = $conn->prepare("select id, unit_id from author where name = '$dup_name'");
    $sqr2->execute();
    my %fir_name_hash = ();
    while (my @row2 = $sqr2->fetchrow_array()) {
        my $author_id = $row2[0];
        my $unit_id   = $row2[1];
        my $fir_name  = getFirNameInUnit($conn, $unit_id);
        if (not exists $fir_name_hash{$fir_name}) {
            $fir_name_hash{$fir_name} = [];   # anonymous array reference
        }
        my $x = $fir_name_hash{$fir_name};
        push @$x, $author_id;
    }
    while (my ($fir_name, $author_id_arr) = each(%fir_name_hash)) {
        my $count = scalar @$author_id_arr;
        if ($count == 1) { next; }
        my $author_id = $author_id_arr->[0];
        for (my $i = 1; $i < $count; $i++) {
            # print "$author_id_arr->[$i] => $author_id\n";
            # just delete in author table, and update in author_paper table
            unifyAuthorAndAuthorPaperTable($conn, $author_id, $author_id_arr->[$i]);
        }
    }
}
select count(*) from author;                # 240,000
select count(distinct(name)) from author;   # 77,000
It is terribly slow! I've run it for 5 hours and it has only removed about 40,000 duplicate names.
How can I make it run faster? I am eager for your advice.
You should not prepare the second SQL statement within the loop, and you can make real use of the preparation by using the ? placeholder:
$conn->do('create index author_name on author (name)');
my $sqr = $conn->prepare('select name from author group by name having count(*) > 1');
# ? is the placeholder; the database driver knows whether it is an
# integer or a string and quotes the input if needed.
my $sqr2 = $conn->prepare('select id, unit_id from author where name = ?');
$sqr->execute();
while (my @row = $sqr->fetchrow_array()) {
    my $dup_name = $row[0];
    $dup_name = formatHtml($dup_name);
    # Now you can reuse the prepared handle with different input
    $sqr2->execute($dup_name);
    my %fir_name_hash = ();
    while (my @row2 = $sqr2->fetchrow_array()) {
        my $author_id = $row2[0];
        my $unit_id   = $row2[1];
        my $fir_name  = getFirNameInUnit($conn, $unit_id);
        if (not exists $fir_name_hash{$fir_name}) {
            $fir_name_hash{$fir_name} = [];   # anonymous array reference
        }
        my $x = $fir_name_hash{$fir_name};
        push @$x, $author_id;
    }
    while (my ($fir_name, $author_id_arr) = each(%fir_name_hash)) {
        my $count = scalar @$author_id_arr;
        if ($count == 1) { next; }
        my $author_id = $author_id_arr->[0];
        for (my $i = 1; $i < $count; $i++) {
            # print "$author_id_arr->[$i] => $author_id\n";
            # just delete in author table, and update in author_paper table
            unifyAuthorAndAuthorPaperTable($conn, $author_id, $author_id_arr->[$i]);
        }
    }
}
This should speed up things as well.
The moment I see a query and a loop I think that you have a latency problem: you query to get a set of values and then iterate over the set to do something else. That's a LOT of latency if it means a network round trip to the database for each row in the set.
It'd be better if you could do it in a single query using an UPDATE and a sub-select OR if you could batch those requests and perform all of them in one round trip.
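For this particular problem, here is a hedged sketch of the single-query approach, assuming the canonical author in each (name, unit fir_name) group is the one with the smallest id, and ignoring the formatHtml() normalization step from the original loop:
-- Repoint author_paper rows from every duplicate author to the
-- canonical (lowest-id) author with the same name and unit fir_name
UPDATE author_paper ap
JOIN author a ON a.id = ap.author_id
JOIN unit u ON u.id = a.unit_id
JOIN (
    SELECT a2.name, u2.fir_name, MIN(a2.id) AS keep_id
    FROM author a2
    JOIN unit u2 ON u2.id = a2.unit_id
    GROUP BY a2.name, u2.fir_name
) k ON k.name = a.name AND k.fir_name = u.fir_name
SET ap.author_id = k.keep_id
WHERE ap.author_id <> k.keep_id;
The now-orphaned duplicate rows in author can then be deleted in one more statement, instead of row by row from Perl.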
You'll get an additional speed up if you use indexes wisely. Every column in a WHERE clause should have an index. Every foreign key should have an index.
I'd run EXPLAIN on your queries and see if there are any full table scans going on. If there are, you've got to index properly.
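For the schema above, that would mean at least the following (the index names are invented):
-- Index the foreign keys used by the joins and lookups
ALTER TABLE author ADD INDEX idx_author_unit_id (unit_id);
ALTER TABLE author_paper ADD INDEX idx_ap_author_id (author_id);
-- Then verify the per-name lookup no longer scans the whole table
EXPLAIN SELECT id, unit_id FROM author WHERE name = 'some name';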
I wonder if a properly designed JOIN would come to your rescue?
240,000 rows in one table and 77,000 in another isn't that large a database.

Insert on first request; update on all subsequent requests

I'm inserting a row into a MySQL database table. On the first insertion I want a new row to be added, but after that I just want that row to be updated. Here's how I'm doing it: an Ajax request calls the following PHP file:
<?php
include "base.php";
$bookID  = $_POST['bookID'];
$shelfID = $_POST['shelfID'];
$userID  = $_SESSION['user_id'];
$query = mysql_query("SELECT shelfID FROM shelves WHERE userID = '$userID' AND shelfID = '$shelfID' AND bookID = '$bookID'");
if (mysql_num_rows($query) == 0) {
    $insert = "INSERT INTO shelves (bookID, shelfID, userID) VALUES ('$bookID','$shelfID','$userID')";
    mysql_query($insert) or die(mysql_error());
} elseif (mysql_num_rows($query) == 1) { // i.e. row already exists
    $update = "UPDATE shelves SET shelfID = '$shelfID' WHERE userID = '$userID' AND bookID = '$bookID'";
    mysql_query($update) or die(mysql_error());
}
?>
As it stands it adds a new row every time.
You should consider using PDO for data access. There is a discussion of what you need to do here: PDO Insert on Duplicate Key Update
I'd flag this as a duplicate, but that question specifically discusses PDO.
You can use the INSERT ... ON DUPLICATE KEY UPDATE syntax. As long as you have a unique index on the data set (i.e. userid + shelfid + bookid) you are inserting, it will do an update instead.
http://dev.mysql.com/doc/refman/5.5/en/insert-on-duplicate.html
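Applied to the shelves table above, that could look like the sketch below; the unique key over (userID, bookID) is an assumption, chosen because the goal is one shelf per user/book pair:
-- One row per (userID, bookID); re-shelving a book updates shelfID in place
ALTER TABLE shelves ADD UNIQUE KEY uq_user_book (userID, bookID);
INSERT INTO shelves (bookID, shelfID, userID)
VALUES ('$bookID', '$shelfID', '$userID')
ON DUPLICATE KEY UPDATE shelfID = VALUES(shelfID);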