MySQL: check if a column in a row is contained in another column?

I'm trying to make a MySQL query that checks if a column in a row is contained within another column in the same row. Is there a way to do that kind of query?
for example:
Key    Value    runHash
2500   tacos    night.2500.293849284
1775   windows  day.176555.43035842
I am trying to write a query that will return the second row and not the first because for the first row, Key is in runHash.
I tried to do:
select * from table where key not in runHash
However, this doesn't appear to be valid MySQL.

You are looking for like (negated, since you want the rows where key is not contained in runHash):
where runHash not like concat('%', `key`, '%')
You can put periods in the pattern as well, if those are important for your pattern matching.
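For example, a complete query might look like this (a sketch; the table name t is illustrative, and the column needs backticks because KEY is a reserved word in MySQL):
select *
from t
where runHash not like concat('%', `key`, '%');
If you want to anchor on the periods so a key can only match a full dot-delimited segment, the pattern becomes concat('%.', `key`, '.%').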

Related

MYSQL query table using column position, NOT column name

Is it possible to write a query like the one below?
UPDATE sale SET sale_order='123456789' WHERE **COLUMN_1** = 2
where I don't explicitly pass the column name? Only its position?
I could get the column names but I am trying to avoid querying the database only to get them.
Thanks.
To answer your question, no, there is no syntax in SQL to reference the column by its position. This goes back to relational theory, in the sense that a table is a set of columns, and members of a set are unordered.
You will either have to know the column name, or else query it from the database:
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA=SCHEMA() AND TABLE_NAME='sale'
AND ORDINAL_POSITION=1;
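If you really need to drive the UPDATE from that lookup, a possible sketch using a server-side prepared statement (the table sale and the value 2 come from your example; everything else is illustrative):
-- Look up the name of the first column, then build and run the UPDATE dynamically.
SELECT COLUMN_NAME INTO @col
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA=SCHEMA() AND TABLE_NAME='sale'
AND ORDINAL_POSITION=1;
SET @sql = CONCAT('UPDATE sale SET sale_order = ''123456789'' WHERE ', @col, ' = 2');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;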
It looks like you are trying to design a query that updates a row by primary key, by assuming the first column is the primary key. The primary key isn't necessarily the first column. It isn't necessarily an integer. It isn't necessarily a single column.
So you are already making assumptions about the table definition. You might as well assume the primary key column is named id or some other convention.

Compare a large wordlist for matches in a mysql table of valid words

I have a large wordlist like this:
('Mill','Test','Report','Stainless','Pipe','Commodity','Steel','Welded','Pipe','Customer','Destination','Los Angeles','Specification','Country','Grade','Customer', ...)
And a MySQL table of words with one column ("word").
How can I pass my wordlist into the query and have returned those words that:
exist (at least once) in the table
don't exist in the table
NOTE:
SELECT word FROM englishwords WHERE word IN('Mill','Test','Report','Stainless','Pipe','Commodity','Steel','Welded','Pipe','Customer','Destination','Los Angeles','Specification','Country','Grade','Customer') GROUP BY word
This works for returning the words that EXIST in the table.
I suppose I could array_diff the before and after, but I was hoping there would be a way to do it simply with one query.
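For the words that DON'T exist, a one-query sketch is to inline the wordlist as a derived table and anti-join it (englishwords and its word column are from your setup; 'Zzyzx' is just a stand-in for a missing word):
SELECT w.word
FROM (SELECT 'Mill' AS word
      UNION ALL SELECT 'Test'
      UNION ALL SELECT 'Zzyzx') AS w
LEFT JOIN englishwords e ON e.word = w.word
WHERE e.word IS NULL;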
I'll try @Barmar's recommendation to just create temporary tables for each operation, too.

mysql how to update a table by auto-incrementing a column

A MySQL 5.3 table with 100K rows has a primary key.
There is also an integer column which is not part of the key. I would like to update this column to contain a unique number for each row: the first record should contain 1, the second 2, and so on.
This could as well be an auto-increment column, but MySQL does not allow auto-increment on non-key columns. I don't want this column to be part of the key because of the way it gets populated from a file.
So how would such a query look?
I don't know why you would want to do something like this, but a possible solution is:
SET @rownum := 0;
UPDATE <table> SET <column> = (@rownum := @rownum + 1) ORDER BY <field>;
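For example, assuming a table items with primary key id and an integer column seq to fill (both names are illustrative):
SET @rownum := 0;
UPDATE items SET seq = (@rownum := @rownum + 1) ORDER BY id;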

making a database in mysql without duplicate values

I have a tab separated text file in the format
id | field 1 | field 2 ...
I want to insert this into a MySQL database with id as the primary key, but the text file may contain duplicate ids.
How do I make sure there is just one entry corresponding to each id?
How do I choose between two lines having the same id? (They might not be consistent, but it's okay to pick one over the other, e.g. the first or the last occurrence.)
Read the text file line by line, parse each line, and use the INSERT ... ON DUPLICATE KEY UPDATE syntax.
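For example, one statement per parsed line might look like this (mytable, field1 and field2 are assumed from your file layout; ON DUPLICATE KEY UPDATE keeps the last occurrence of an id, while INSERT IGNORE would keep the first):
INSERT INTO mytable (id, field1, field2)
VALUES (42, 'foo', 'bar')
ON DUPLICATE KEY UPDATE field1 = VALUES(field1), field2 = VALUES(field2);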
I would do a SELECT before INSERT and count the number of rows returned by the SELECT. Something like this:
SELECT * FROM yourTable WHERE yourTable.id = :id
If that returns any row, don't insert and move on to the next one. Otherwise, insert it.
Edit: this checks row by row from the application side. It would be good if you could also add a unique constraint to guarantee uniqueness. Something like:
ALTER TABLE yourTable ADD CONSTRAINT ukID UNIQUE (id)
Presuming a Unix shell, I'd do this:
awk '!x[$1]++' inputfile.tsv > uniqfile.tsv
then do your import off of the uniqfile.
Edit: to be clear, that script de-duplicates the input file on the first field: it only outputs a row the first time its first field is seen (i.e. while the counter in the hash keyed on $1 is still zero), so the first occurrence wins.

Deleting Duplicates in Access 2003

I have an Access 2003 table with ~4000 records which was made from 17 different tables. Roughly half of these records are duplicates. There is no unique identifying column (id, name, etc.). There is an id column which was auto-filled when the tables were combined, meaning that the duplicates aren't completely identical (though this column could be removed if it makes things easier).
I have used the Access Find Duplicates Query Wizard, which gives me a list of the duplicated records but won't let me delete them (seriously, what use is this query if I can't delete them?). I've tried converting the generated query to a delete query, but that changes the number of rows it finds. I'd alter the SQL by hand, but it's a bit beyond me and is 7 lines long.
Does anyone know a good way of getting rid of the duplicates?
The reason the find duplicates query won't let you delete the records is that it is basically just an aggregate query: it counts the number of duplicates it finds and returns the cases where the count is greater than 1.
Consider that if you did make a delete query based on the find duplicates, it would delete all rows that have duplicate values, which is maybe not what you want. You want to delete all but one of the duplicates.
You should try to delete all duplicates of a record apart from one, excluding the ID column from your comparison. I suggest the simplest way to do this is to run a make-table query that selects the distinct values of every field except the ID field (SELECT DISTINCT Field1, Field2, ... FROM MyTable), using the results to create a new table of around 2000 records (if half are duplicates).
Then create an ID column on your new table and use an update query to set this ID to the first matching ID in the original table. You could do this using DLookup, which returns the first EXPRESSION value where CRITERIA is true in DOMAIN.
The DLookup() function returns one value from a single field even if more than one record satisfies the criteria. If no record satisfies the criteria, or if the domain contains no records, DLookup() returns a Null.
Since you are identifying the first matching ID based on all the other fields, which are unique values, the unmatched IDs will belong to duplicates. You are reversing the PK relation: identifying the first matching key given a set of unique fields. After that, you should set the ID to be the PK.
Of course, this assumes the ID has no inherent meaning and you don't care about keeping one particular ID for a given duplicated row over the IDs belonging to the other duplicated rows. It also assumes you care about the data in the ID column and want to preserve it for all remaining rows; otherwise just skip the DLookup step and do a SELECT DISTINCT on all columns apart from the ID.
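A sketch of that update query in Access SQL (all table and field names are illustrative, extended the same way for each remaining field; numeric fields would drop the quote delimiters in the criteria string):
UPDATE MYNEWTABLE
SET ID = DLookup("ID", "MyTable",
"Column1='" & [MYNEWTABLE].[Column1] & "' AND Column2='" & [MYNEWTABLE].[Column2] & "'");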
Use a select with all columns except the ID column:
SELECT DISTINCTROW Column1, Column2, Column3
INTO MYNEWTABLE
FROM TABLE
Then you can simply swap the table names.
This solution will give you a new table without duplicates.
The following will preserve original IDs (keeping the highest id in each duplicate group) and do it in one step:
DELETE FROM table_with_duplicates
WHERE table_with_duplicates.id NOT IN
(SELECT max(id)
FROM table_with_duplicates
GROUP BY duplicated_field_1, duplicated_field_2, ...
)
Now you have the original table with no duplicates and preserved ids.
And always remember to back up your data before trying large DELETEs.
A variation that instead deletes the highest ID from each group of duplicates (note it removes only one duplicate per group per pass, so re-run it until no rows are affected):
DELETE * FROM table_with_duplicates
WHERE table_with_duplicates.ID In
(SELECT max(ID)
FROM table_with_duplicates
GROUP BY [duplicated_field_1]
HAVING Count(*)>1
)
Actually I found a very simple solution (it took a while). If all of your fields across a record are the same, i.e. a complete duplicate record, then just make one query with every field and group on every field with GROUP BY. The duplicates will combine, and you can append the result to a new table and rename it the same as the existing table. If you have a primary key field, you can just leave it out of the query and the data will still combine (assuming you don't care about the data in the primary key field). I don't know why no one has mentioned this solution; it took me 5 hours to come up with. :)