Unique fields on MySQL table - generating promo codes

I am developing a PHP script and I have a table like this:
TABLE_CODE
code varchar 8
name varchar 30
this code column has to be a code using random letters from A to Z and characters from 0 to 9 and has to be unique. all uppercase. Something like
A4RTX33Z
I have created a method to generate this code using PHP, but this is an intensive task because I have to query the database to check whether the generated code is unique before proceeding, and the table may have a lot of records.
I know MySQL is a bag of tricks, but I don't have advanced knowledge of it, so I wonder if there's some mechanism that could be built into the table to run a script (or something) every time a new record is created on that table, filling the code column with a unique value.
thanks
edit: What I wonder is whether there's a way to create the code on the fly, as the record is being added to the table, while keeping that code unique.

It is better to generate these codes in SQL. This is an 8-character random "promo code generator":
INSERT IGNORE INTO
  TABLE_CODE (code, name)
VALUES (
  UPPER(SUBSTRING(MD5(RAND()) FROM 1 FOR 8)), -- random fixed-length 8-character code
  'your code name'
)
Add a UNIQUE index on the code field as @JW suggested, and some error handling in PHP, because occasionally the generated value may not be unique; with a plain INSERT MySQL will raise a duplicate-key error, and with INSERT IGNORE the statement will simply affect zero rows.

Adding a UNIQUE constraint on the code column is the first thing you would need to do. Then, to insert the code I would write a small loop like this:
// INSERT IGNORE will not generate an error if the code already exists;
// rather, the number of affected rows will be 0.
$stmt = $db->prepare('INSERT IGNORE INTO table_code (code, name) VALUES (?, ?)');
$name = 'whatever name';
do {
    $code = func_to_generate_code();
    $stmt->execute(array($code, $name));
} while (!$stmt->rowCount()); // repeat until at least one row is affected
As the table grows the number of loop iterations may increase, so if you feel it should only try, say, three times, you could add a counter to the loop condition and throw an error when the limit is reached.
By the way, I would suggest using transactions so that if an error occurs after the code has been generated, rolling back removes the code and it can be reused.
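For reference, here is a minimal sketch of what func_to_generate_code() might look like; the implementation below is an assumption, using the full A-Z and 0-9 character set from the question (note that the MD5 approach above only yields the hex characters 0-9 and A-F):
// Hypothetical implementation of func_to_generate_code() used in the loop above.
// Builds an 8-character uppercase code from A-Z and 0-9, as the question requires.
function func_to_generate_code($length = 8)
{
    $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
    $code = '';
    for ($i = 0; $i < $length; $i++) {
        // mt_rand() is fine for promo codes; use random_int() if unpredictability matters
        $code .= $alphabet[mt_rand(0, strlen($alphabet) - 1)];
    }
    return $code;
}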

Insert a set specified as a comma separated string into a table

Concept
3 Tables:
Events (INT EventRid, Title, Desc, ....)
Participants (INT ParticipantRid, FirstName, LastName,Address....)
ParticipantEventMap(INT refEventRid,INT refParticipantRid)
The application (without significant re-write) will attempt to submit the data about the event (one field per Events table field, PLUS a field 'Participants' which is a comma-separated list of ParticipantRids). The fields in the Events table are easy to add/update, but I am looking for a way to submit a query which will do something along the lines of:
INSERT INTO ParticipantEventMap (refEventRid,refParticipantRid)
VALUES (10003211,(Participants));
Of course this is totally invalid SQL syntax, the idea being that it would expand (10003211,(Participants)) into (10003211,ParticipantRid[1]),(10003211,ParticipantRid[2]),...
Is there a way to do this as an SQL query, or am I required to perform all the mangling on the PHP side before submitting separate queries?
INSERT IGNORE INTO ParticipantEventMap (refEventRid, refParticipantRid)
SELECT
    1002324 AS refEventRid,
    ParticipantRid AS refParticipantRid
FROM Participants
WHERE ParticipantRid IN (1, 2, 3, 4);
Thus, by replacing 1002324 with {EventID} and 1,2,3,4 with {Participants} in the PHP prepared statement, I get the desired result! For those interested, the app I'm trying to make this work in is the DHTMLx Scheduler module.
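As a side note, here is a minimal sketch of how the {Participants} list could be expanded into placeholders so the values stay parameterized; it assumes a PDO connection in $db, and the $eventId / $participantIds variables are hypothetical:
// Hypothetical example: expand a list of participant ids into one placeholder
// per value, so the IN (...) clause stays parameterized.
$eventId = 1002324;                  // {EventID}
$participantIds = array(1, 2, 3, 4); // {Participants}

$in = implode(',', array_fill(0, count($participantIds), '?'));
$sql = "INSERT IGNORE INTO ParticipantEventMap (refEventRid, refParticipantRid)
        SELECT ?, ParticipantRid FROM Participants WHERE ParticipantRid IN ($in)";

$stmt = $db->prepare($sql);
$stmt->execute(array_merge(array($eventId), $participantIds));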

Delete duplicate rows, do not preserve one row

I need a query that goes through each entry in a database, checks if a single value is duplicated elsewhere in the database, and if it is, deletes both entries (or all of them, if there are more than two).
The problem is that the entries are URLs, up to 255 characters, with no other way of identifying a row. Some existing answers on Stack Overflow do not work for me due to performance limitations, or they use a unique id, which obviously won't work when dealing with a string.
Long Version:
I have two databases containing URLs (and only URLs). One database has around 3,000 urls and the other around 1,000.
However, a large majority of the 1,000 urls were taken from the 3,000 url database. I need to merge the 1,000 into the 3,000 as new entries only.
For this, I made a third database with the combined URLs from both tables, about 4,000 entries. I need to find all duplicate entries in this database and delete them (both of them, without leaving either).
I have followed the queries from a few examples on this site, but whenever I try to delete both entries it ends up deleting all the entries, or giving SQL errors.
Alternatively:
I have two databases, each containing one of the separate URL lists. I need to check each row from one against the other to find any that aren't duplicated, and then add those to a third database.
Since you were looking for an SQL solution, here is one. Let's assume that your table has a single column for simplicity's sake; this will work for any number of fields, of course:
CREATE TABLE `allkindsofvalues` (
`value` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The following series of queries will accomplish what you are looking for:
CREATE TABLE allkindsofvalues_temp LIKE allkindsofvalues;
INSERT INTO allkindsofvalues_temp
SELECT * FROM allkindsofvalues akv1
WHERE (SELECT COUNT(*) FROM allkindsofvalues akv2 WHERE akv1.value = akv2.value) = 1;
DROP TABLE allkindsofvalues;
RENAME TABLE allkindsofvalues_temp to allkindsofvalues;
The OP wrote:
I've got my own PHP solution which is pretty hacky, but works.
I went with a PHP script to accomplish this, as I'm more familiar with PHP than MySQL.
This generates a simple list of URLs that exist only in the target database, not in both. If you have more than 7,000 entries to parse this may take a while, and you will need to copy/paste the results into a text file or expand the script to store them back into a database.
I'm just doing it manually to save time.
Note: Uses MeekroDB
<?php
require('meekrodb.2.1.class.php');

DB::$user = 'root';
DB::$password = '';
DB::$dbName = 'testdb';

$all = DB::query('SELECT * FROM old_urls LIMIT 7000');

foreach ($all as $row) {
    $test = DB::query('SELECT url FROM new_urls WHERE url=%s', $row['url']);
    if (!is_array($test) || count($test) == 0) {
        // url exists in old_urls but not in new_urls
        echo $row['url'] . "\n";
    }
}
?>
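For comparison, the same check can usually be pushed into a single query instead of one query per row. Here is a minimal sketch, still using MeekroDB and assuming both tables have a url column as in the script above:
// Hypothetical single-query version of the script above: let MySQL find the
// urls that exist in old_urls but not in new_urls, instead of querying per row.
$missing = DB::query(
    'SELECT o.url
       FROM old_urls o
       LEFT JOIN new_urls n ON n.url = o.url
      WHERE n.url IS NULL'
);

foreach ($missing as $row) {
    echo $row['url'] . "\n";
}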

MySQL only inserting first row

I'm trying to insert a ton of rows into my MySQL database. I have a query like this, but with about 700 more repetitive entries in it; for some reason the query is only inserting the first row into the database. In this case that would be '374','4957','0'.
INSERT INTO table VALUES ('374','4957','0'),('374','3834','0'),('374','4958','0'),('374','5076','0'),('374','4921','0'),('374','3835','0'),('374','4922','0'),('374','3836','0'),('374','3837','0'),('374','4879','0'),('374','3838','0')
I can't figure out what I'm doing wrong.
Thank you in advance.
I don't mean to state the obvious, but if the first field ('374') is your primary key field, then this is the issue, since every row repeats the same key value.
Otherwise, are there any error messages received from the database? That is always a good place to look for bugs.
To better understand why something is not working, next time use code like this:
$sql = "INSERT INTO table VALUES ('374','4957','0'),('374','3834','0')";
if (!mysqli_query($link, $sql)) {
    printf("Errormessage: %s\n", mysqli_error($link));
}
That should display the error message returned from MySQL.
More information: PHP manual - mysqli_error
Try specifying the column names before the VALUES clause.
For example:
INSERT INTO table (column1,column2,column3) VALUES ...
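Putting the two suggestions together, here is a minimal sketch (with hypothetical table and column names) that names the columns explicitly and reports both the MySQL error and the number of rows actually inserted:
// Hypothetical table/column names; the point is to name the columns and to
// check mysqli_error() / mysqli_affected_rows() after a multi-row insert.
$sql = "INSERT INTO my_table (col_a, col_b, col_c)
        VALUES ('374','4957','0'),('374','3834','0'),('374','4958','0')";

if (!mysqli_query($link, $sql)) {
    printf("Errormessage: %s\n", mysqli_error($link));
} else {
    printf("Rows inserted: %d\n", mysqli_affected_rows($link));
}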

Update MySQL without specifying column names

I want to update a MySQL row, but I do not want to specify all the column names.
The table has 9 columns and I always want to update the last 7 of them, in the right order.
These are the Fields
id
projectid
fangate
home
thanks
overview
winner
modules.wallPost
modules.overviewParticipant
Is there any way I can update the last few fields without specifying their names?
With an INSERT statement this can be done pretty easily by doing this:
INSERT INTO `settings`
VALUES (NULL, ...field values...)
So I was hoping I could do something like this:
UPDATE `settings`
VALUES (NULL, ...field values...)
WHERE ...statement...
But unfortunately that doesn't work.
If the first two columns make up the primary key (or a unique index), you could use REPLACE.
So basically instead of writing
UPDATE settings
SET fangate = $fangate,
    home = $home,
    thanks = $thanks,
    overview = $overview,
    winner = $winner,
    `modules.wallPost` = $modules.wallPost,
    `modules.overviewParticipant` = $modules.overviewParticipant
WHERE id = $id AND projectId = $projectId
You will write
REPLACE INTO settings
VALUES ($id,
$projectId,
$fangate,
$home,
$thanks,
$overview,
$winner,
$modules.wallPost,
$modules.overviewParticipant)
Of course this only works if the row already exists; otherwise it will be created. Also, it causes a DELETE and an INSERT behind the scenes, if that matters.
You can't. You always have to specify the column names, because UPDATE doesn't edit a whole row, it edits specified columns.
Here's a link with the UPDATE syntax:
http://dev.mysql.com/doc/refman/5.0/en/update.html
No. It works with INSERT because, even though you didn't specify the column names, you supplied values for all columns in the VALUES clause. With UPDATE, you need to specify which column each value is associated with.
UPDATE syntax requires the column names that will be modified.
Are you always updating the same table and columns?
In that case one way would be to define a stored procedure in your schema.
That way you could just do:
CALL update_settings(id, projectid, values_of_last_7 ..);
You would have to create the procedure first; check the MySQL documentation for how to do this, e.g.:
http://docs.oracle.com/cd/E17952_01/refman-5.0-en/create-procedure.html
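For illustration, here is a minimal sketch of what such a procedure and its call might look like from PHP; the procedure name, parameter names, and column types are assumptions based on the fields listed above, not a known API:
// Hypothetical stored procedure matching the settings table above.
// DELIMITER tricks are only needed in the mysql CLI; through the API a single
// CREATE PROCEDURE statement can be sent as-is.
$mysqli->query("
    CREATE PROCEDURE update_settings(
        IN p_id INT, IN p_projectid INT,
        IN p_fangate VARCHAR(255), IN p_home VARCHAR(255),
        IN p_thanks VARCHAR(255), IN p_overview VARCHAR(255),
        IN p_winner VARCHAR(255),
        IN p_wallpost VARCHAR(255), IN p_overviewparticipant VARCHAR(255)
    )
    BEGIN
        UPDATE settings
           SET fangate = p_fangate, home = p_home, thanks = p_thanks,
               overview = p_overview, winner = p_winner,
               `modules.wallPost` = p_wallpost,
               `modules.overviewParticipant` = p_overviewparticipant
         WHERE id = p_id AND projectid = p_projectid;
    END
");

// Calling it afterwards:
$stmt = $mysqli->prepare('CALL update_settings(?, ?, ?, ?, ?, ?, ?, ?, ?)');
$stmt->bind_param('iisssssss', $id, $projectid, $fangate, $home, $thanks,
                  $overview, $winner, $wallPost, $overviewParticipant);
$stmt->execute();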
I'm afraid you can't avoid specifying the column names.
You can refer to the update documentation here.

What's the fastest way to check if a URL already exists in a MySQL table? [duplicate]

I have a varchar(255) column where I store URLs in a MySQL database. This column has a unique index.
When my crawler encounters a URL, it has to check the database to see if that URL already exists. If it exists, the crawler selects data about that entry. If it does not exist, the crawler adds the URL. I currently do this with the following code:
$sql = "SELECT id, junk
FROM files
WHERE url = '$url'";
$results = $this->mysqli->query( $sql );
// the file already exists in the system
if( $results->num_rows > 0 )
{
// store data to variables
}
// the file does not exists yet... add it
else
{
// insert new file
$sql = "INSERT INTO files( url )
VALUES( '$url' )";
$results = $this->mysqli->query( $sql );
}
I realize there are lots of ways to do this. I've read that using a MySQL if/else statement could speed this up. Can someone explain how MySQL would handle that differently, and why that may be faster? Are there other alternatives I should test? My crawlers are doing a lot of checking like this, and speeding up this process could be a significant speed boost for my system.
First of all, URLs can get much longer than varchar(255).
Second, because they're that long you don't want to do string comparisons; that gets very slow as the table grows. Instead, create a column with a hash value and compare that.
You should index the hash column, of course.
As for the actual insert, an alternative is to put a unique constraint on the hash. Then do your inserts blindly, allowing SQL to reject the dupes. (But you'll have to put an exception handler into your code, which has its own overhead.)
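A minimal sketch of that idea, assuming a hypothetical url_hash column (here an MD5 of the URL) with a unique index on it:
// Hypothetical hash-column lookup: compare a short, indexed hash instead of
// the full URL string. url_hash is assumed to be CHAR(32) with a UNIQUE index.
$hash = md5($url);

$stmt = $this->mysqli->prepare('SELECT id, junk FROM files WHERE url_hash = ?');
$stmt->bind_param('s', $hash);
$stmt->execute();
$stmt->store_result();

if ($stmt->num_rows > 0) {
    $stmt->bind_result($id, $junk);
    $stmt->fetch();
    // store data to variables
} else {
    $insert = $this->mysqli->prepare('INSERT INTO files (url, url_hash) VALUES (?, ?)');
    $insert->bind_param('ss', $url, $hash);
    $insert->execute();
}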
If you are not using transactions, to insert a new row only when no matching row exists, you can use:
"INSERT INTO files (url) SELECT '$url' FROM DUAL WHERE NOT EXISTS (SELECT * FROM files WHERE url = '$url');"
I can't think of a one-line command to select and insert at the same time.
I would do the insert first and check for success (affected_rows), then select. If you check first and then do the insert, there is a chance the URL gets inserted by someone else during that small time window, and you would need more code to handle that situation.
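A minimal sketch of that insert-first pattern, relying on the unique index on url from the question (with INSERT IGNORE, affected_rows stays 0 when the URL is already there):
// Insert first; the UNIQUE index on url makes duplicates a no-op with IGNORE.
$stmt = $this->mysqli->prepare('INSERT IGNORE INTO files (url) VALUES (?)');
$stmt->bind_param('s', $url);
$stmt->execute();

if ($stmt->affected_rows === 0) {
    // the URL was already there: fetch the existing row
    $stmt = $this->mysqli->prepare('SELECT id, junk FROM files WHERE url = ?');
    $stmt->bind_param('s', $url);
    $stmt->execute();
    $stmt->bind_result($id, $junk);
    $stmt->fetch();
}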