I currently have a table (AllProducts) which contains product information. It has 16 columns and approximately 125,000 rows.
I need to create a unique value in the database, as there is no unique value present in the table. I cannot use the auto-increment feature because my database is emptied and refilled on a daily basis (and thus the ids for specific products would change).
I want to use a varchar field (url) as the unique value. To do this I created a view (AllProductsCategories) which ensures that the combination of url and shop is unique.
select min(`a`.`insertionTime`) AS `insertionTime`,
       `a`.`shop` AS `shop`,
       min(`a`.`name`) AS `name`,
       min(`a`.`category`) AS `category`,
       max(`a`.`description`) AS `description`,
       min(`a`.`price`) AS `price`,
       `a`.`url` AS `url`,
       avg(`a`.`image`) AS `image`,
       min(`a`.`fromPrice`) AS `fromPrice`,
       min(`a`.`deliveryCosts`) AS `deliveryCosts`,
       max(`a`.`stock`) AS `stock`,
       max(`a`.`deliveryTime`) AS `deliveryTime`,
       max(`a`.`ean`) AS `ean`,
       max(`a`.`color`) AS `color`,
       max(`a`.`size`) AS `size`,
       max(`a`.`brand`) AS `brand`
from `AllProducts` `a`
group by `a`.`url`, `a`.`shop`
order by NULL
This works fine but is quite slow. The query below takes 51 seconds to complete:
SELECT * FROM AllProductsCategories ORDER BY NULL LIMIT 50
I am quite new to MySQL and have experimented by indexing the following columns: category, name, url, shop, and the combination shop/url.
Now my questions:
1) Is this the correct approach if I want to ensure that the url field is unique? I currently use a GROUP BY to merge all the info about one url. An alternative approach would be to delete the duplicates (though I am not sure how to do that).
2) If the current approach is OK, how can I speed up this process?
If the data is re-loaded every day, then you should just fix it when it is reloaded.
Perhaps that is not possible. I would suggest the following approach, assuming that the triple (url, shop, insertionTime) is unique. First, build a composite index on (url, shop, insertionTime).
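For example (a sketch; the index name is illustrative):
-- if url is a long varchar, a prefix length such as url(191) may be required
alter table AllProducts add index idx_url_shop_time (url, shop, insertionTime);
Then use this query: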
select ap.*
from AllProducts ap
where ap.insertionTime = (select ap2.insertionTime
                          from AllProducts ap2
                          where ap2.url = ap.url and
                                ap2.shop = ap.shop
                          order by ap2.insertionTime
                          limit 1
                         );
MySQL does not allow subqueries in the FROM clause of a view, but it does allow them in the SELECT, WHERE, and HAVING clauses. This query should cycle through the table, doing an index lookup for each row and returning only the rows that have the minimum insertion time.
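That also means this query can itself be wrapped in a view, since its subquery sits in the WHERE clause; a sketch, reusing the view name from the question:
create view AllProductsCategories as
select ap.*
from AllProducts ap
where ap.insertionTime = (select ap2.insertionTime
                          from AllProducts ap2
                          where ap2.url = ap.url and
                                ap2.shop = ap.shop
                          order by ap2.insertionTime
                          limit 1);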
Related
I'm working on a table containing around 40,000,000 rows, and I'm trying to extract the first entry for each "subscription_id" (a foreign key from another table). Here is my actual request:
SELECT *
FROM billing bill
WHERE bill.billing_value NOT LIKE 'not_ok%'
  AND (SELECT bill2.billing_id
       FROM billing bill2
       WHERE bill2.subscription_id = bill.subscription_id
       ORDER BY bill2.billing_id ASC
       LIMIT 1) = bill.billing_id;
This request works correctly when I put a small LIMIT on it, but I cannot get it to complete over the whole database.
Is there a way I could optimise it somehow? Or do things in another way?
(The table structure and indexes were shown in screenshots, omitted here.)
This is an example of the ROW_NUMBER() solution mentioned in the comments above.
select *
from (
    select *,
           row_number() over (partition by subscription_id order by billing_id) as rownum
    from billing
    where billing_value not like 'not_ok%'
) t
where rownum = 1;
The ROW_NUMBER() function is available as of MySQL 8.0, so if you haven't upgraded yet, you will need to do so to use it.
Unfortunately, this won't be much of an improvement, because the NOT LIKE causes a table-scan regardless of the pattern you search for.
I believe it requires a virtual column with an index to optimize that condition:
alter table billing
  add column ok tinyint(1) as (billing_value not like 'not_ok%'),
  add index (ok);
select *
from (
    select *,
           row_number() over (partition by subscription_id order by billing_id) as rownum
    from billing
    where ok = true
) t
where rownum = 1;
Now it will use the index on the ok virtual column to reduce the set of examined rows.
This still might be a costly query on a 40 million row table, because the derived table subquery creates a large temporary table. If it's not fast enough, you'll have to really reconsider how you store and query this data.
For example, you could add a first_ok column with an index, set to true only on the rows you need to fetch (the first row per subscription_id whose billing value does not start with 'not_ok'). But you must maintain this new column manually, and you risk it being wrong if you don't. This is a denormalized design, but one tailored to the query you want to run.
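A minimal sketch of that denormalization, assuming the flag is recomputed after each data load (column and index names are illustrative):
alter table billing
  add column first_ok tinyint(1) not null default 0,
  add index (first_ok);

-- recompute the flag, e.g. after each load; the derived table avoids
-- MySQL's restriction on updating a table that is also selected from
update billing b
join (select subscription_id, min(billing_id) as billing_id
      from billing
      where billing_value not like 'not_ok%'
      group by subscription_id) f
  on f.subscription_id = b.subscription_id
 and f.billing_id = b.billing_id
set b.first_ok = 1;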
I haven't tried it, because I don't have a MySQL DB at hand, but this query seems much simpler:
select *
from billing
where billing_id in (select min(billing_id)
                     from billing
                     group by subscription_id)
  and billing_value not like 'not_ok%';
The inner select gets the minimum billing_id for each subscription. The outer query fetches the rest of the billing record.
If performance is an issue, I'd add the billing_id field to the third index, so you get a composite index on (subscription_id, billing_id). This will help the inner query.
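A sketch of that change (the index name is illustrative; it extends the existing index on subscription_id):
ALTER TABLE billing ADD INDEX idx_subscription_billing (subscription_id, billing_id);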
I want to remove duplicates based on the combination of listings.product_id and listings.channel_listing_id
This simple query returns 400,000 rows (the ids of the rows I want to keep):
SELECT id
FROM `listings`
WHERE is_verified = 0
GROUP BY product_id, channel_listing_id
While this variation returns 1,600,000 rows, which is every record in the table, not only those with is_verified = 0:
SELECT *
FROM (
    SELECT id
    FROM `listings`
    WHERE is_verified = 0
    GROUP BY product_id, channel_listing_id
) AS keepem
I'd expect them to return the same number of rows.
What's the reason for this? How can I avoid it (in order to use the subselect in the where condition of the DELETE statement)?
EDIT: I found that doing a SELECT DISTINCT in the outer SELECT "fixes" it (it returns 400,000 records, as it should). I'm still not sure whether I should trust this subquery, since there is no DISTINCT in the DELETE statement.
EDIT 2: Seems to be just a bug in the way phpMyAdmin reports the total count of the rows.
Your query as it stands is ambiguous. Suppose you have two listings with the same product_id and channel_listing_id. Then which id is supposed to be returned? The first, the second? Or both, ignoring the GROUP BY?
What if there is more than one id, with different product and channel ids?
Try removing the ambiguity by selecting MAX(id) AS id and adding DISTINCT.
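A sketch of the disambiguated query, keeping the is_verified filter from the question:
SELECT DISTINCT MAX(id) AS id
FROM `listings`
WHERE is_verified = 0
GROUP BY product_id, channel_listing_id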
Are there any foreign keys to worry about? If not, you could pour the original table into a copy, empty the original, and copy back only the non-duplicates. Messier, but you only run SELECTs and DELETEs that are guaranteed to succeed, and you also get to keep a backup.
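A rough sketch of that approach (the backup table name is illustrative; run it in a maintenance window, since the table is briefly empty):
-- 1) back up everything
CREATE TABLE listings_backup LIKE listings;
INSERT INTO listings_backup SELECT * FROM listings;

-- 2) empty the original
TRUNCATE TABLE listings;

-- 3) copy back the verified rows, plus one unverified row
--    per (product_id, channel_listing_id) combination
INSERT INTO listings
SELECT * FROM listings_backup WHERE is_verified <> 0;

INSERT INTO listings
SELECT b.*
FROM listings_backup b
JOIN (SELECT MAX(id) AS id
      FROM listings_backup
      WHERE is_verified = 0
      GROUP BY product_id, channel_listing_id) keepem
  ON keepem.id = b.id;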
Assign aliases in order to avoid field reference ambiguity:
SELECT
keepem.*
FROM
(
SELECT
innerStat.id
FROM
`listings` AS innerStat
WHERE
innerStat.is_verified = 0
GROUP BY
innerStat.product_id,
innerStat.channel_listing_id
) AS keepem
I need to know the most effective way of deleting duplicated rows from a very large table (over 1 billion rows in this table), as an ineffective query may take days to execute.
I need to delete all duplicate urls in the search table, i.e.:
DELETE FROM search WHERE (url) NOT IN
(
    SELECT url FROM
    (
        SELECT url FROM search GROUP BY url
    ) X
);
Depends entirely on your indexes. Do this in two steps:
1) Create the highest-selectivity indexes your DBMS supports on the URL field, combined with any other field that can distinguish records with the same URL, such as a primary key or timestamp field.
2) Write procedural code (not just a query) to process a small fraction of the records at a time, committing the results in small batches, e.g. sliced by PK mod 1000, or by the 3 characters of the URL preceding the .TLD part.
This is the best way to get a predictable result, unless you are sure the DB process won't run out of memory, log file space, etc. during the long cycle of deletes that a straight query would require.
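A minimal sketch of the batching idea from step 2, assuming an integer primary key id, an index on (url, id), and keeping the lowest id per url (the slice size of 1000 is illustrative):
-- run once per slice (0 .. 999), committing after each slice;
-- a row is deleted when a lower id exists for the same url
DELETE s
FROM search s
JOIN search keep
  ON keep.url = s.url
 AND keep.id < s.id
WHERE s.id MOD 1000 = 0;  -- slice 0; repeat with = 1, = 2, ...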
DELETE FROM search
WHERE id NOT IN (
    SELECT keep_id
    FROM (
        -- MIN(id) per url keeps one row for every url, whether or not
        -- it has duplicates; the derived table works around MySQL
        -- error 1093 (can't select from the table being deleted from)
        SELECT MIN(id) AS keep_id
        FROM search
        GROUP BY url
    ) t
);
I have a table with call records. Each call has rows with a 'state' of CALLSTART and CALLEND, and each call has a unique 'callid'. Each record also has a unique auto-increment 'id', and each row has a MySQL TIMESTAMP field.
In a previous question I asked for a way to calculate the total of seconds of phone calls. This came to this SQL:
SELECT SUM(TIME_TO_SEC(differences))
FROM
(
    SELECT SEC_TO_TIME(TIMESTAMPDIFF(SECOND, MIN(timestamp), MAX(timestamp))) AS differences
    FROM table
    GROUP BY callid
) x
Now I would like to know how to do this, only for callid's that also have a row with the state CONNECTED.
Screenshot of table: http://imgur.com/gmdeSaY
Use a having clause:
SELECT SUM(difference)
FROM (SELECT callid, TIMESTAMPDIFF(SECOND, MIN(timestamp), MAX(timestamp)) as difference
FROM table
GROUP BY callid
HAVING SUM(state = 'Connected') > 0
) c;
If you only want the difference in seconds, I simplified the calculation a bit.
EDIT: (for Mihai)
If you put in:
HAVING state in ('Connected')
Then the value of state comes from an arbitrary row for each callid. Not all the rows, just an arbitrary one. You might or might not get lucky. As a general rule, avoid using the MySQL extension that allows "bare" columns in the select and having clauses, unless you really use the feature intentionally and carefully.
SELECT DISTINCT
    `Stock`.`ProductNumber`,
    `Stock`.`Description`,
    `TComponent_Status`.`component`,
    `TComponent_Status`.`certificate`,
    `TComponent_Status`.`status`,
    `TComponent_Status`.`date_created`
FROM Stock, TBOM, TComponent_Status
WHERE `TBOM`.`Component` = `TComponent_Status`.`component`
  AND `Stock`.`ProductNumber` = `TBOM`.`Product`
Basically, table TBOM has 24,588,820 rows.
The query is ridiculously slow, and I'm not too sure what I can do to make it better. I have indexed all the other tables in the query, but TBOM has a few duplicates in those columns, so I can't even run that command. I'm a little baffled.
To start, index the following fields:
TBOM.Component
TBOM.Product
TComponent_Status.component
Stock.ProductNumber
Not all of the above indexes may be necessary (e.g., the last two), but it is a good start.
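For reference, a sketch of the corresponding statements (index names are illustrative):
ALTER TABLE TBOM ADD INDEX idx_tbom_component (Component);
ALTER TABLE TBOM ADD INDEX idx_tbom_product (Product);
ALTER TABLE TComponent_Status ADD INDEX idx_tcs_component (component);
ALTER TABLE Stock ADD INDEX idx_stock_productnumber (ProductNumber);
Note that these are plain, non-unique indexes, so the duplicate values in TBOM are not a problem here.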
Also, remove the DISTINCT if you don't absolutely need it.
The only thing I can really think of is having an index on your Stock table on
(ProductNumber, Description)
This can help in two ways. Since you are only using those two fields in the query, the engine won't be required to go to the full data row of each stock record; both parts are in the index, so it can use the index alone. Additionally, since you are doing a DISTINCT, having the index available should help optimize the DISTINCT as well.
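A sketch of that covering index (the name is illustrative):
-- if Description is a long varchar, a prefix length such as Description(191) may be required
ALTER TABLE Stock ADD INDEX idx_stock_product_desc (ProductNumber, Description);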
Now, the other issue: time. Since you are doing a DISTINCT from stock to product to product status, you are asking for all 24 million TBOM items (assuming TBOM is a bill of materials), and since each BOM component could have multiple status records, you are getting every BOM row for EVERY component change.
If what you are really looking for is something like the most recent change of any component item, you might want to do it in reverse... Something like...
SELECT DISTINCT
Stock.ProductNumber,
Stock.Description,
JustThese.component,
JustThese.certificate,
JustThese.`status`,
JustThese.date_created
FROM
( select DISTINCT
TCS.Component,
TCS.Certificate,
TCS.`status`,
TCS.date_created
from
TComponent_Status TCS
where
TCS.date_created >= 'some date you want to limit based upon' ) as JustThese
JOIN TBOM
on JustThese.Component = TBOM.Component
JOIN Stock
on TBOM.Product = Stock.ProductNumber
If this is the case, I would put an index on the component status table, something like (date_created, component, certificate, status). This way the WHERE clause is optimized, and the DISTINCT is too, since those pieces are already part of the index.
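A sketch of that index (the name is illustrative):
ALTER TABLE TComponent_Status
  ADD INDEX idx_tcs_created (date_created, component, certificate, status);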
But as you currently have it, if you have 10 TBOM entries for a single "component", and that component has 100 changes, you get 10 * 100 = 1,000 entries in your result set. Scale that across 24 million rows and it is definitely not going to look good.