Due to a bug in my JavaScript click handling, multiple Location objects are posted in a JSON array that is sent to the server. I think I know how to fix that bug, but I'd also like to implement a server-side duplicate-removal function in the database. However, I'm not sure how to write this query.
The only affected table is laid out as
+----+------------+--------+
| ID | locationID | linkID |
+----+------------+--------+
| 64 | 13         | 14     |
| 65 | 14         | 13     |
| 66 | 14         | 15     |
| 67 | 15         | 14     |
| 68 | 15         | 16     |
| 69 | 16         | 17     |
| 70 | 16         | 14     |
| 71 | 17         | 16     |
| 72 | 17         | 16     |
| 73 | 17         | 16     |
| 74 | 17         | 16     |
| 75 | 17         | 16     |
| 76 | 17         | 16     |
| 77 | 17         | 16     |
+----+------------+--------+
As you can see, I have multiple (17, 16) pairs, while locationID 14 appears in two distinct pairs, (14, 13) and (14, 15). How can I delete all but one record of any duplicate entries?
Don't implement post-factum correction logic; put a unique index on the fields that need to be unique. That way the database will stop duplicate inserts before it's too late.
If you're using MySQL 5.1 or higher you can remove the dupes and create a unique index in one command:
ALTER IGNORE TABLE `YOURTABLE`
ADD UNIQUE INDEX somefancynamefortheindex (locationID, linkID);
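If you're on MySQL 5.7 or later, note that the IGNORE clause of ALTER TABLE has been removed; in that case, delete the duplicates first (for example with the self-join DELETE shown further down) and then add a plain unique index, e.g.:
-- Same hypothetical index name as above
ALTER TABLE `YOURTABLE`
ADD UNIQUE INDEX somefancynamefortheindex (locationID, linkID);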
You can create a temporary table to hold the distinct records, then empty the original table and re-insert the data from the temporary table.
CREATE TEMPORARY TABLE temp_table (locationID INT, linkID INT);
-- Keep exactly one copy of each (locationID, linkID) pair
INSERT INTO temp_table (locationID, linkID) SELECT DISTINCT locationID, linkID FROM table1;
DELETE FROM table1;
INSERT INTO table1 (locationID, linkID) SELECT * FROM temp_table;
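If other sessions can write to table1 while this runs, it may be worth wrapping the DELETE and the re-insert in a single transaction (assuming an InnoDB table) so nothing slips in between the two statements.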
-- Keeps the row with the lowest ID for each duplicated (locationID, linkID) pair
DELETE FROM tbl
USING tbl, tbl t2
WHERE tbl.locationID = t2.locationID
  AND tbl.linkID = t2.linkID
  AND tbl.ID > t2.ID;
I am assuming you don't mean the clean-up, but the new check? Put a unique index on the two columns if possible; if you don't have control of the DB, do an upsert and check for NULLs instead of a plain insert.
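A minimal sketch of the "check for NULLs" idea (YOURTABLE and the literal values are placeholders, since the question doesn't name the table): insert the pair only if an outer join against the existing rows finds nothing.
-- Insert (17, 16) only if that (locationID, linkID) pair is not already present
INSERT INTO YOURTABLE (locationID, linkID)
SELECT src.locationID, src.linkID
FROM (SELECT 17 AS locationID, 16 AS linkID) AS src
LEFT JOIN YOURTABLE existing
       ON existing.locationID = src.locationID
      AND existing.linkID = src.linkID
WHERE existing.ID IS NULL;
With the unique index in place you could instead rely on INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE and let MySQL reject the duplicates.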
I used to think that SQL cannot process unstructured data (like text) unless we write some user-defined functions in C. However, InnoDB's full-text search feature seems to have done much of that work already.
According to https://dev.mysql.com/doc/refman/5.6/en/innodb-fulltext-index.html, the index is saved in InnoDB tables named FTS_00000..._00000..._INDEX_?.
I tried to run SELECT * FROM FTS_00000..._00000..._INDEX_1, hoping to see the tokens of each document (perhaps with stopwords already removed). However, I got an error message
ERROR 1146 (42S02): Table 'tf.FTS_0000000000000028_0000000000000030_INDEX_1' doesn't exist
even though SELECT * FROM information_schema.INNODB_SYS_TABLES; shows that the table exists.
Does anyone know how I could get the tokens of each document I inserted into the full-text index? It would be great if I could get the information in the following schema:
token_id  document_id  count
"apple"   103343       3
"orange"  9593         1
...
Just because InnoDB uses a table as an internal data structure doesn't mean you have access to query those FTS tables with SQL statements. They don't appear in INFORMATION_SCHEMA.TABLES.
After creating the table opening_lines, which is the example given on that manual page, I see this:
mysql> SELECT table_id, name, space FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
-> WHERE name LIKE 'test/%';
+----------+----------------------------------------------------+-------+
| table_id | name                                               | space |
+----------+----------------------------------------------------+-------+
|       52 | test/FTS_000000000000002e_0000000000000085_INDEX_1 |    36 |
|       53 | test/FTS_000000000000002e_0000000000000085_INDEX_2 |    37 |
|       54 | test/FTS_000000000000002e_0000000000000085_INDEX_3 |    38 |
|       55 | test/FTS_000000000000002e_0000000000000085_INDEX_4 |    39 |
|       56 | test/FTS_000000000000002e_0000000000000085_INDEX_5 |    40 |
|       57 | test/FTS_000000000000002e_0000000000000085_INDEX_6 |    41 |
|       47 | test/FTS_000000000000002e_BEING_DELETED            |    31 |
|       48 | test/FTS_000000000000002e_BEING_DELETED_CACHE      |    32 |
|       49 | test/FTS_000000000000002e_CONFIG                   |    33 |
|       50 | test/FTS_000000000000002e_DELETED                  |    34 |
|       51 | test/FTS_000000000000002e_DELETED_CACHE            |    35 |
|       46 | test/opening_lines                                 |    30 |
+----------+----------------------------------------------------+-------+
12 rows in set (0.00 sec)
mysql> SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='test';
+---------------+
| TABLE_NAME    |
+---------------+
| opening_lines |
+---------------+
1 row in set (0.00 sec)
As far as I know, there's no way to query the FTS tables directly at all. They are only for InnoDB's internal implementation of fulltext indexing.
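That said, if the goal is just to see the tokens (rather than to query the FTS_ tables themselves), one workaround that I believe works on MySQL 5.6+ is to point the innodb_ft_aux_table variable at the indexed table and read the INFORMATION_SCHEMA full-text views; a rough sketch:
-- Requires privileges to set a global variable; 'test/opening_lines' is the
-- database/table from the example above
SET GLOBAL innodb_ft_aux_table = 'test/opening_lines';

-- WORD is the token and DOC_ID identifies the indexed row; recently inserted
-- tokens may still sit in INNODB_FT_INDEX_CACHE rather than INNODB_FT_INDEX_TABLE
SELECT WORD, DOC_ID, DOC_COUNT
FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE;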
I need help with a small problem: a subtraction within the same table and column.
Well, I am creating a view, but the application has generated the used-time results in the same table and column.
My table has the following columns: id, field_id, object_id and value_date.
| ID | FIELD_ID | OBJECT_ID | VALUE_DATE          |
| 55 | 4        | 33        | 2016-12-18 19:02:00 |
| 56 | 5        | 33        | 2016-12-18 19:12:00 |
| 57 | 4        | 35        | 2016-12-18 19:30:00 |
| 58 | 5        | 35        | 2016-12-18 20:00:00 |
I do not have much knowledge of SQL, but I have tried some functions like TIMESTAMPDIFF, PERIOD_DIFF and other examples from stackoverflow.com.
Can someone help me subtract the value_date of the row with ID 55 (field_id 4) from the row with ID 56 (field_id 5) for object_id 33, and return the result in minutes, e.g. 10 or 00:10:00?
Even an article about this problem would help me. Thank you very much!
Let's assume that you want the result in days; then the query would be:
-- MySQL's DATEDIFF(end, start) returns the difference in days
SELECT DATEDIFF(endDate, startDate) AS 'Day'
FROM table1;
You can find a complete example here.
The solution is below:
-- Pair the field_id 4 row (start) with the field_id 5 row (end) of the same
-- object and take the difference in minutes
SELECT TIMESTAMPDIFF(MINUTE, F1.value_date, F2.value_date) AS minutes,
       F1.value_date, F2.value_date,
       F1.object_id, F2.object_id,
       F1.field_id, F2.field_id
FROM otrs_tst.dynamic_field_value F1
JOIN otrs_tst.dynamic_field_value F2 ON F1.object_id = F2.object_id
WHERE F1.field_id IN ('4', '5')
  AND F2.field_id IN ('4', '5')
  AND F1.field_id < F2.field_id
GROUP BY F1.object_id, F2.field_id
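With the sample rows above this should return 10 minutes for object_id 33 (19:02 to 19:12) and 30 minutes for object_id 35 (19:30 to 20:00), matching the 00:10:00 example in the question.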
I have a table that I am using as a temporary table. A cron runs every hour to set a certain value for each row.
| id | item_id | value |
+====+=========+=======+
| 1  | 5       | 52    |
| 2  | 34      | 314   |
| 3  | 27      | 189   |
| 4  | 19      | 200   |
+====+=========+=======+
What I would like to know is whether it is better to TRUNCATE and then refill this table, or to SELECT each existing row and UPDATE it (or INSERT it if it doesn't exist).
Insert the record if it doesn't exist in your temporary table; if it is already there and you need to change its value, update only that specific record.
That would be wiser, because it reduces the execution time of the operation.
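A minimal sketch of that approach, assuming item_id carries a UNIQUE (or PRIMARY) key so MySQL can detect the conflict; the table name temp_values is a placeholder, since the question doesn't give one:
-- Insert the hourly value for item 34, or overwrite the existing value if a
-- row for that item_id is already there
INSERT INTO temp_values (item_id, value)
VALUES (34, 314)
ON DUPLICATE KEY UPDATE value = VALUES(value);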
I have a MySQL database that contains the table "message_route". This table tracks the path between hubs that a message from a device takes before it finds a modem and goes out to the internet.
"message_route" contains the following columns:
id, summary_id, device_id, hub_address, hop_count, event_time
Each row in the table represents a single "hop" between two hubs. The column "device_id" gives the id of the device the message originated from. The column "hub_address" gives the id of the hub the message hop was received by, and "hop_count" counts these hops incrementally. The full route of the message is bound together by the "summary_id" key. A snippet of the table to illustrate:
+-----+------------+-----------+-------------+-----------+---------------------+
| id  | summary_id | device_id | hub_address | hop_count | event_time          |
+-----+------------+-----------+-------------+-----------+---------------------+
| 180 |        158 |      1099 |       31527 |         1 | 2011-10-01 04:50:53 |
| 181 |        159 |      1676 |       51778 |         1 | 2011-10-01 00:12:04 |
| 182 |        159 |      1676 |       43567 |         2 | 2011-10-01 00:12:04 |
| 183 |        159 |      1676 |       33805 |         3 | 2011-10-01 00:12:04 |
| 184 |        160 |      2326 |       37575 |         1 | 2011-10-01 00:12:07 |
| 185 |        160 |      2326 |       48024 |         2 | 2011-10-01 00:12:07 |
| 186 |        160 |      2326 |       57652 |         3 | 2011-10-01 00:12:07 |
+-----+------------+-----------+-------------+-----------+---------------------+
There are three messages in total here. The message with summary_id = 158 touched only one hub before finding a modem, so the row with id = 180 is the entire record of that message. Summary_ids 159 and 160 each have 3 hops, touching 3 different hubs apiece. There is no upper limit on the number of hops a message can have.
I need to create a MySQL query that gives me a list of the unique "hub_address" values that constitute the last hop of a message. In other words, the hub_address associated with the maximum hop_count for each summary_id. With the database snippet above, the output should be "31527, 33805, 57652".
I have been unable to figure this out. In the meantime, I am using this code as a proxy, which only gives me the unique hub_address values for messages with a single hop, such as summary_id = 158.
SELECT DISTINCT(x.hub_address)
FROM (SELECT hub_address, COUNT(summary_id) AS freq
FROM message_route GROUP BY summary_id) AS x
WHERE x.freq = 1;
I would approach this with a correlated subquery that picks, for each summary_id, the row with the highest hop_count (event_time can't distinguish the hops here, since all hops of a message share the same timestamp in the sample data):
select distinct mr.hub_address
from message_route mr
where mr.hop_count = (select max(mr2.hop_count)
                      from message_route mr2
                      where mr2.summary_id = mr.summary_id
                     );
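An equivalent formulation with a derived table, in case the correlated subquery is slow on a large message_route table (same columns as above, just a sketch):
-- Compute the highest hop_count per summary_id once, then join back to pick
-- up the hub_address of that final hop
select distinct mr.hub_address
from message_route mr
join (select summary_id, max(hop_count) as max_hop
      from message_route
      group by summary_id) last_hop
  on last_hop.summary_id = mr.summary_id
 and last_hop.max_hop = mr.hop_count;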
I have two tables, structured as below
Table1
Serial  | Src              | Albumid(primarykey)
________|__________________|____________________
1       | /root/wewe.jpg   | 20
2       | /root/wewe.jpg   | 21
3       | /root/wewe.jpg   | 21
4       | /root/wewe.jpg   | 23
5       | /root/wewe.jpg   | 18
Table2
Albumid | Albumname        | AlbumCover   // Albumid is a foreign key referencing the first table
________|__________________|____________
20      | AAA              | null
21      | bbb              | null
31      | vcc              | null
42      | ddd              | null
18      | eee              | null
I followed this POST to update my AlbumCover in Table2 using the Serial no. of the first table.
create proc AddCover #Serial int
as
Begin
update Table1 set albumcover='somthing' where table1.serial = #Serial
end
Can I do something like this using a foreign key constraint?
You'll need to do the update on Table2. To base that update on a condition involving values from Table1, check this post for examples:
MySQL - UPDATE query based on SELECT Query
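For this particular schema, a minimal sketch might look like the following (assuming the cover should be copied from Table1.Src for the row identified by the given Serial; adjust the names and the SET expression to your real logic):
-- Update Table2 through a join on Albumid; 1 stands in for the Serial value
-- you would pass from your application or stored procedure
UPDATE Table2 t2
JOIN Table1 t1 ON t1.Albumid = t2.Albumid
SET t2.AlbumCover = t1.Src
WHERE t1.Serial = 1;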