This is my table tb_currency_minimum_amount
When I am inserting a record using
INSERT IGNORE INTO tb_currency_minimum_amount ( id, currency_id, payment_method, minimum_amount) VALUES (NULL, 1, 16, 0.02)
again and again it creates a new entry
Rows with id 32, 33, and 34 are the same.
It is assigning it id 32, not 27, instead of ignoring the duplicate.
The purpose of an AUTO_INCREMENT column is to ensure unique identifiers for the rows in the table.
It is just an implementation detail that it uses consecutive integer values. It is not a requirement for these values to be consecutive. Your code must not rely on the values being consecutive.
The AUTO_INCREMENT value is increased by each INSERT statement that does not provide a value for that column, no matter whether the generated value is actually used (the query creates a new row) or not (the insert fails because of a duplicate key constraint, or it updates an existing row because of ON DUPLICATE KEY).
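For example, take a hypothetical version of your table that does have a UNIQUE key on (currency_id, payment_method), which is what INSERT IGNORE needs before it can skip duplicates at all (the column types below are guesses). The counter can still advance on the skipped inserts:
CREATE TABLE tb_currency_minimum_amount (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    currency_id INT NOT NULL,
    payment_method INT NOT NULL,
    minimum_amount DECIMAL(10,2) NOT NULL,
    UNIQUE KEY uq_currency_method (currency_id, payment_method)
);

INSERT IGNORE INTO tb_currency_minimum_amount VALUES (NULL, 1, 16, 0.02); -- inserted, id = 1
INSERT IGNORE INTO tb_currency_minimum_amount VALUES (NULL, 1, 16, 0.02); -- skipped, but the counter may still be consumed
INSERT IGNORE INTO tb_currency_minimum_amount VALUES (NULL, 1, 17, 0.02); -- inserted, id may be 3 rather than 2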
Auto increment just counts up by +1 for every new record. It does not check whether the other columns are unique or duplicated.
In your id column, which is the auto-increment one, I do not see any duplicates.
Could you elaborate on what you are trying to achieve?
It is assigning it id 32, not 27, instead of ignoring the duplicate.
If I understand your question correctly, you should use the SQL SELECT DISTINCT statement.
The SELECT DISTINCT statement is used to return only distinct (different) values.
Inside a table, a column often contains many duplicate values; and sometimes you only want to list the different (distinct) values.
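Applied to your table, something like this lists every column except the auto-increment id, so identical rows collapse into one:
SELECT DISTINCT currency_id, payment_method, minimum_amount
FROM tb_currency_minimum_amount;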
Related
I want to insert multiple rows into a table, using a single INSERT statement. This is no problem, since SQL offers the option to provide multiple rows as parameter for a single INSERT statement. Now, those rows contain an ID field that is incremented automatically, i.e. its value is set by the database, not by my code.
As a result, I would like to get the ID values of the inserted rows. My basic question is: How do I do that for MariaDB / MySQL?
As it turns out, this is pretty simple, e.g. in PostgreSQL, as PostgreSQL has the RETURNING clause for INSERT which returns the desired values for one or even for multiple rows. This is exactly what I want and it works.
Unfortunately, neither MariaDB nor MySQL have PostgreSQL's RETURNING clause, so I need to fallback to something such as LAST_INSERT_ID(), but this only returns the ID of the single last inserted row, even if multiple rows were inserted using a single INSERT. How do I get all the ID values?
My code currently looks like this:
INSERT INTO mytable
(foo, bar)
VALUES
('fooA', 'barA'),
('fooB', 'barB');
SELECT LAST_INSERT_ID() AS id;
How can I solve this issue in a way that works even with concurrent writes?
(And no, it's not an option to change to a UUID field, or something like this; the auto-increment field is given, and can not be changed.)
MySQL & MariaDB have the LAST_INSERT_ID() function, and it returns the id generated by the most recent INSERT statement in your current session.
But when your INSERT statement inserts multiple rows, LAST_INSERT_ID() returns the first id in the set generated.
In such a batch of multiple rows, you can rely on the subsequent ids being consecutive. The MySQL JDBC driver depends on this, for example.
If the rows you insert include a mix of NULL and non-NULL values for the id column, you risk breaking this assumption, and the JDBC driver then returns the wrong values for the set of generated ids.
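Under that consecutive-ids assumption, a common sketch is to combine LAST_INSERT_ID() with ROW_COUNT() right after the multi-row INSERT to compute the whole range. Both functions are per connection, so concurrent writes from other sessions do not change what you read here:
INSERT INTO mytable
(foo, bar)
VALUES
('fooA', 'barA'),
('fooB', 'barB');

-- first and last generated ids; ROW_COUNT() is the number of rows the INSERT just affected
SELECT LAST_INSERT_ID() AS first_id,
       LAST_INSERT_ID() + ROW_COUNT() - 1 AS last_id;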
As stated in the comments, you can capture the inserted IDs (SQL Server):
use tempdb;

create table test (
    id int identity(1,1) primary key,
    t varchar(10) null
);

-- side table that collects the generated identity values
create table ids (
    i int not null
);

-- OUTPUT writes each generated id into ids as the rows are inserted
insert test (t)
output inserted.id into ids
values (null), (null), (null);

select * from test;
select * from ids;
I have a lists table that has an order field.
When I insert a new record, is it possible to find the order of the previous row and increment the new row?
Or should I go about it myself in PHP by doing an OrderBy('order') query and getting the max() value of that?
When you declare a table in MySQL you can use an auto-increment id so you don't have to deal with incrementing it yourself:
CREATE TABLE people (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
);
As explained in the documentation,
An integer or floating-point column can have the additional attribute AUTO_INCREMENT. When you insert a value of NULL (recommended) or 0 into an indexed AUTO_INCREMENT column, the column is set to the next sequence value. Typically this is value+1, where value is the largest value for the column currently in the table. AUTO_INCREMENT sequences begin with 1.
I suggest you omit the field completely when inserting new records.
You can then retrieve the last inserted id with the LAST_INSERT_ID() SQL function (or PHP's mysqli_insert_id() function, for example).
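Roughly, with the people table above (the inserted value is a placeholder):
INSERT INTO people (name) VALUES ('John'); -- id is omitted, so MySQL assigns the next value
SELECT LAST_INSERT_ID(); -- id generated for that row on this connection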
But since that's not what you wanted, probably because of one of the reasons quoted from MarioZ's comment:
If you are already using auto-increment for the ID you can use it for the order (that can be one reason). For auto-increment the column must be set as primary and unique; values can't be repeated. The auto-increment comes from the number in the record: if you inserted 10 rows and you delete 2, the next insert with auto-increment will be 11 (if the last row is now 8 you'd want it to be 9). Those are possible reasons not to use it for what #Notflip wants :P
... You'll have to use PHP, with LOCK TABLES and UNLOCK TABLES SQL statements before and after retrieving the last order and inserting the new one, to avoid simultaneous records ending up with the same "order".
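A rough sketch of that approach in SQL, using the lists table and order column from the question (the name column and the value are placeholders; the backticks are needed because order is a reserved word):
LOCK TABLES lists WRITE;

SELECT COALESCE(MAX(`order`), 0) + 1 INTO @next_order FROM lists; -- last order + 1

INSERT INTO lists (name, `order`) VALUES ('new item', @next_order);

UNLOCK TABLES; -- release the lock so other sessions can insert again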
I am using MySQL as the DB in my application development.
I want to clarify a thing.
Imagine there is a table called test.
Its columns are col1, col2, col3, col4.
These columns have separate indexes, i.e. 4 indexes in total.
I am inserting a record with values only for col1 and col2.
When a column has an index, an insert operation has a cost.
My question is:
When I insert records with values only for col1 and col2, is there a cost from the indexes on col3 and col4?
Will all indexes be updated on every insert, or only the indexes on the columns I actually insert into?
Let's get a basic fact straight: in an RDBMS there is no such thing as inserting a record for only a selected number of fields in a table. If you insert a record, then all fields within that table will have a value for that record. That value may be a null value, but it is there.
Not to mention another fact, that columns may have non-null default values, so executing an insert that does not specify value for them will still result a non-null value to be stored.
MySQL indexes even null values, so if you have a separate index on each column, MySQL has to update all of those indexes when a new record is inserted into the table, regardless of how many fields are explicitly given a value in the insert.
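A quick sketch with the test table from the question (the column types are assumptions):
CREATE TABLE test (
    col1 INT, col2 INT, col3 INT, col4 INT,
    INDEX (col1), INDEX (col2), INDEX (col3), INDEX (col4)
);

INSERT INTO test (col1, col2) VALUES (1, 2);
-- col3 and col4 are stored as NULL for this row, and since MySQL indexes NULLs,
-- all four indexes are updated by this single insert.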
I have a table with three columns ('xCoord', 'yCoord' and 'Total'). I want to increment the Total value if the x,y coordinate pair already exists, else I want to create a new row with the new x and y values with Total = 1.
Below is my best attempt so far: running the query for the first time adds a new line (as expected), but running it a second time adds another new line instead of incrementing the previously created one. Is there a way to perform this action with a single query?
INSERT
INTO tbl_DATA_HeatmapValues (xCoord, yCoord, Total)
VALUES (11, 22, 1)
ON DUPLICATE KEY
UPDATE Total = Total + 1
Your query should work, but you need to have a unique index. The check whether a row already exists is based on the index, not the actual data in the row. No index, no existence check, therefore it inserts no matter what.
INSERT ... ON DUPLICATE KEY UPDATE performs an update only where the insert would cause a duplicate value(s) in a UNIQUE index or PRIMARY KEY, see this link for details: http://dev.mysql.com/doc/refman/5.6/en/insert-on-duplicate.html
Consider creating a unique index on the xCoord, yCoord columns (the index name is up to you):
CREATE UNIQUE INDEX idx_coord ON tbl_DATA_HeatmapValues (xCoord, yCoord);
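With a unique index like that in place, running the statement from the question twice does what you expect:
INSERT INTO tbl_DATA_HeatmapValues (xCoord, yCoord, Total)
VALUES (11, 22, 1)
ON DUPLICATE KEY UPDATE Total = Total + 1;
-- first run: inserts (11, 22) with Total = 1
-- second run: the (11, 22) pair hits the unique index, so Total becomes 2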
I have an Access 2003 table with ~4000 records which was made from 17 different tables. Roughly half of these records are duplicates. There is no unique identifying column (id, name etc). There is an id column which was auto filled when the tables were combined meaning that the duplicates aren't completely identical (though this column could be removed if it makes things easier).
I have used the Access Find Duplicates Query Wizard, which gives me a list of the duplicated records but won't let me delete them (seriously, what use is this query if I can't delete them?). I've tried converting the generated query to a delete query, but that changes the number of rows it finds. I'd alter the SQL by hand but it's a bit beyond me and is 7 lines long.
Does anyone know a good way of getting rid of the duplicates?
The reason the find duplicates query won't let you delete the records is because it is basically just an aggregate query, it is counting the number of duplicates it finds and returning the cases where the count is greater than 1.
Consider that if you did make a delete query based on the find duplicates, it would delete all rows that have duplicate values, which is maybe not what you want. You want to delete all but one of the duplicates.
You should try to delete all duplicates of a record apart from one, excluding the ID column in your comparison. I suggest the simplest way to do this is a make-table query of all the unique values (Select Distinct Field1, Field2... from MyTable) over every field except the ID field, and use the results to create a new table of around 2000 records (if half are duplicates).
Then, create an ID column on your new table, use an update query to update this ID to the first matching ID in the original table (you could do this using DLookup, which will return the first EXPRESSION value where CRITERIA is true in DOMAIN).
The DLookup() function returns one value from a single field even if more than one record satisfies the criteria. If no record satisfies the criteria, or if the domain contains no records, DLookup() returns a Null.
Since you are identifying the first matching ID based on all the other fields, which are unique values, the unmatched IDs will belong to duplicates. You will be reversing the PK relation, identifying the first matching key given a set of unique fields. After that, you should set the ID to be PK. Of course this assumes the ID has no inherent meaning, and you don't care about keeping one particular ID for a given duplicated row over any of the IDs belonging to the other duplicated rows. This assumes you care about the data in the ID column so you want to preserve it for all remaining rows, otherwise just ignore the DLookup step and do a Select Distinct on all columns apart from the ID.
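A rough Access SQL sketch of those two steps (table and field names are made up, the fields are assumed to be text without embedded quotes, and step two assumes you have already added the ID column to the new table):
SELECT DISTINCT Field1, Field2, Field3
INTO tblDeduped
FROM tblCombined;

UPDATE tblDeduped
SET tblDeduped.ID = DLookup("ID", "tblCombined",
    "Field1='" & [Field1] & "' AND Field2='" & [Field2] & "' AND Field3='" & [Field3] & "'");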
Use a select with all columns except the ID column:
SELECT DISTINCTROW Column1, Column2, Column3
INTO MYNEWTABLE
FROM TABLE
You can simply swap the names.
This solution will give you a new table with no duplicates.
The following will preserve original IDs and do it in one step:
DELETE FROM table_with_duplicates
WHERE table_with_duplicates.id NOT IN
(SELECT max(id)
FROM table_with_duplicates
GROUP BY duplicated_field_1, duplicated_field_2, ...
)
Now you have the original table with no duplicates and preserved ids.
And always remember to back up your data before trying large DELETEs.
A variation is to delete the duplicate with the highest ID from each group, keeping the others (if a group has more than two copies you will need to run it more than once):
DELETE * FROM table_with_duplicates
WHERE table_with_duplicates.ID In
(SELECT max(ID)
FROM table_with_duplicates
GROUP BY [duplicated_field_1]
HAVING Count(*)>1
)
Actually I found a very simple solution; it took a while. If all of your fields are the same, i.e. a complete duplicate record, then just make one query with every field and group them with "Group By". The duplicates will combine, and you can just append that result to a new table and rename it to the same name as the existing table. If you have a primary key field you can simply leave it out of the query and the data will still combine (assuming you don't care about the data in the primary key field). I don't know why no one has mentioned this solution; it took me 5 hr. to come up with. :)
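In Access SQL the grouped make-table version of that looks roughly like this (field names are made up; the primary key is simply left out of the list so complete duplicates collapse); afterwards you rename the new table to the original name:
SELECT Field1, Field2, Field3
INTO tblGrouped
FROM tblCombined
GROUP BY Field1, Field2, Field3;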