Auto-incrementing serial number for a MySQL view - mysql

I'm having an issue with my project: I need an auto-incrementing value in my MySQL view, and it would be nice if you could help me solve this obstacle. Here is the code in which I want an auto-incrementing serial number (say S.No) as the first column.
CREATE
ALGORITHM = UNDEFINED
DEFINER = `srems_admin`@`localhost`
SQL SECURITY DEFINER
VIEW `emp_elec_consumption_view` AS
SELECT
`t1`.`PFNUMBER` AS `PFNUMBER`,
`emp`.`EMPNAME` AS `EMPNAME`,
`t1`.`MonthAndYear` AS `MonthAndYear`,
`qt`.`QTRSCODE` AS `QTRSCODE`,
`t1`.`UNITS_CONSUMED` AS `UNITS_CONSUMED`,
(`t2`.`FIXED_COMPONENT` + (`t1`.`UNITS_CONSUMED` * `t2`.`RATE_COMPONENT`)) AS `Amount`
FROM
(((`srems`.`mstqtroccu` `qt`
JOIN `srems`.`mstemp` `emp`)
JOIN `srems`.`msttariffrate` `t2`)
JOIN (SELECT
`srems`.`tranmeterreading`.`PFNUMBER` AS `PFNUMBER`,
(`srems`.`tranmeterreading`.`CLOSINGREADING` - `srems`.`tranmeterreading`.`OPENINGREADING`) AS `UNITS_CONSUMED`,
CONCAT(CONVERT( IF((LENGTH(MONTH(`srems`.`tranmeterreading`.`READINGDATE`)) > 1), MONTH(`srems`.`tranmeterreading`.`READINGDATE`), CONCAT('0', MONTH(`srems`.`tranmeterreading`.`READINGDATE`))) USING UTF8), '/', RIGHT(YEAR(`srems`.`tranmeterreading`.`READINGDATE`), 2)) AS `MonthAndYear`,
(SELECT
`t`.`TRANSACTIONID`
FROM
`srems`.`msttariffrate` `t`
WHERE
(`t`.`TORANGE` > (`srems`.`tranmeterreading`.`CLOSINGREADING` - `srems`.`tranmeterreading`.`OPENINGREADING`))
LIMIT 1) AS `tariffplanid`
FROM
`srems`.`tranmeterreading`) `t1`)
WHERE
((`t1`.`tariffplanid` = `t2`.`TRANSACTIONID`)
AND (`t1`.`PFNUMBER` = `qt`.`PFNUMBER`)
AND (`t1`.`PFNUMBER` = `emp`.`PFNUMBER`))
Please show where to insert things and post it as a comment, so that S.No is auto-incrementing starting from 1 and appears as the first column. Thanks in advance.

Your view has no chance of working in MySQL as written, so this particular approach is a dead end.
MySQL does not allow subqueries in the FROM clause of a view definition (a restriction only lifted in 5.7.7), and your query is pretty complicated, with lots of subqueries.
Views also cannot use user-defined variables, so getting a row number inside the view itself is rather complicated.
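A common workaround (a sketch only; the view and column names are taken from your post, and the ordering is just an example) is to generate the serial number when you SELECT from the view rather than inside it, either with ROW_NUMBER() on MySQL 8.0+ or with a user variable on older versions:
-- MySQL 8.0+: number the rows at query time with a window function
SELECT ROW_NUMBER() OVER (ORDER BY v.PFNUMBER, v.MonthAndYear) AS `S.No`, v.*
FROM emp_elec_consumption_view v;
-- Older versions: a user variable on the outer query
-- (variables are not permitted inside the view definition itself)
SELECT @sno := @sno + 1 AS `S.No`, v.*
FROM emp_elec_consumption_view v
CROSS JOIN (SELECT @sno := 0) AS init;
The numbering then starts from 1 on every SELECT, which is usually what a serial-number column is meant to do anyway.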

Related

MySQL query performance is low

I have a query which has been running for around 2 hours over the last few days, but before that it took only 2 to 3 minutes. I have not been able to find the reason for its sudden slowness. Can anyone help me with this?
The query explain plan is in this screenshot: https://i.stack.imgur.com/cgsS1.png
The query is:
select
IFNULL(EMAIL,'') as EMAIL,
IFNULL(SITE_CD,'') as SITE_CD,
IFNULL(OPT_TYPE_CD,'') as OPT_TYPE_CD,
IFNULL(OPT_IN_IND,'') as OPT_IN_IND,
IFNULL(EVENT_TSP,'') as EVENT_TSP,
IFNULL(APPLICATION,'') as APPLICATION
from (
SELECT newsletter_entry.email email,
newsletter.site_cd site_cd,
REPLACE (newsletter.TYPE, 'OPTIN_','') opt_type_cd,
CASE
WHEN newsletter_event_temp.post_status = 'SUBSCRIBED' THEN 'Y'
WHEN newsletter_event_temp.post_status = 'UNSUBSCRIBED' THEN
'N'
ELSE ''
END
opt_in_ind,
newsletter_event_temp.event_date event_tsp,
entry_context.application application
FROM amg_toolkit.newsletter_entry,
amg_toolkit.newsletter,
(select NEWSLETTER_EVENT.* from amg_toolkit.NEWSLETTER_EVENT,
amg_toolkit.entry_context
where newsletter_event.EVENT_DATE >= '2017-07-11 00:01:23'
AND newsletter_event.EVENT_DATE < '2017-07-11 01:01:23'
and newsletter_event.ENTRY_CONTEXT_ID = entry_context.ENTRY_CONTEXT_ID
and entry_context.APPLICATION != 'feedbackloop') newsletter_event_temp,
amg_toolkit.entry_context
WHERE newsletter_entry.newsletter_id = newsletter.newsletter_id
AND newsletter_entry.newsletter_entry_id =
newsletter_event_temp.newsletter_entry_id
AND newsletter.TYPE IN ('OPTIN_PRIM', 'OPTIN_THRD', 'OPTIN_WRLS')
AND newsletter_event_temp.entry_context_id NOT IN
(select d.ENTRY_CONTEXT_ID from amg_toolkit.sweepstake a,
amg_toolkit.sweepstake_entry b, amg_toolkit.user_entry c,
amg_toolkit.entry_context d where a.exclude_data = 'Y' and
a.sweepstake_id=b.sweepstake_id and b.USER_ENTRY_ID=c.USER_ENTRY_ID and
c.ENTRY_CONTEXT_ID = d.ENTRY_CONTEXT_ID)
AND newsletter_event_temp.entry_context_id =
entry_context.entry_context_id
AND newsletter_event_temp.event_date >= '2017-07-11 00:01:23'
AND newsletter_event_temp.event_date < '2017-07-11 01:01:23') a;
Don't use .*
Select only the columns you actually use in your query.
Avoid nested sub-selects if you don't need them.
I don't see a need for them in this query. You query the data three times this way instead of just once.
The slowness can be explained by an inefficient query having to deal with tables that have a growing number of records.
NOT IN is resource intensive. Can you do that in a better way, avoiding the NOT IN logic?
JOINs are usually faster than subqueries. NOT IN ( SELECT ... ) can usually be turned into LEFT JOIN ... WHERE id IS NULL.
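For instance, the sweepstake exclusion could be rewritten along these lines (a sketch using the table and column names from your query, not tested; note the behaviour differs from NOT IN if ENTRY_CONTEXT_ID can be NULL):
SELECT ne.*
FROM amg_toolkit.NEWSLETTER_EVENT ne
LEFT JOIN (
    SELECT d.ENTRY_CONTEXT_ID
    FROM amg_toolkit.sweepstake a
    JOIN amg_toolkit.sweepstake_entry b ON b.sweepstake_id = a.sweepstake_id
    JOIN amg_toolkit.user_entry c ON c.USER_ENTRY_ID = b.USER_ENTRY_ID
    JOIN amg_toolkit.entry_context d ON d.ENTRY_CONTEXT_ID = c.ENTRY_CONTEXT_ID
    WHERE a.exclude_data = 'Y'
) excl ON excl.ENTRY_CONTEXT_ID = ne.ENTRY_CONTEXT_ID
WHERE excl.ENTRY_CONTEXT_ID IS NULL;
-- the date-range and newsletter filters from the original query would stay as they are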
What is the a in a.exclude_data? Looks like a syntax error.
These indexes are likely to help:
newsletter_event: INDEX(ENTRY_CONTEXT_ID, EVENT_DATE) -- in this order
You also need it on newsletter_event_temp, but since that is a derived table an index is not possible there, so something has to give. What version of MySQL are you running? Perhaps you could actually CREATE TEMPORARY TABLE and ADD INDEX.
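If you go the temporary-table route, the idea is roughly this (a sketch; the filters and column names are taken from your query, and the index choice is an assumption):
CREATE TEMPORARY TABLE newsletter_event_temp AS
SELECT ne.*
FROM amg_toolkit.NEWSLETTER_EVENT ne
JOIN amg_toolkit.entry_context ec ON ec.ENTRY_CONTEXT_ID = ne.ENTRY_CONTEXT_ID
WHERE ne.EVENT_DATE >= '2017-07-11 00:01:23'
  AND ne.EVENT_DATE < '2017-07-11 01:01:23'
  AND ec.APPLICATION != 'feedbackloop';
ALTER TABLE newsletter_event_temp ADD INDEX (ENTRY_CONTEXT_ID, EVENT_DATE);
-- then join newsletter_event_temp in the main query as before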

MySQL Query gets too complex for me

I'm trying to write a MySQL query that updates a cell in table1 with information gathered from two other tables.
Gathering the data from the other two tables goes without much issue (it is slow, but that's because one of the two tables has 4601537 records in it; all the rows for one report are split into separate records, meaning that one report has more than 200 records).
The query that I use to join the two tables together is:
# First Table, containing Report_ID's: RE
# Table that has to be updated: REGI
# Join Table: JT
SELECT JT.report_id as ReportID, REGI.Serienummer as SerialNo FROM Blancco_Registration.TrialTable as REGI
JOIN (SELECT RE.Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry as RE
WHERE RE.path_id=92) AS JT ON JT.Value_string = REGI.Serienummer
WHERE REGI.HardwareType="PC" AND REGI.BlanccoReport=0 LIMIT 100
This returns 100 records (I limit it because the database is in use during work hours and I don't want to steal all the resources).
However, I want to use these results in a query that updates the REGI table (the same table the select reads those 100 records from in the first place).
That gives the error that I cannot select from the table while updating it (logically). So I tried selecting the statement above into a temp table and then updating from it; but then I get too many results (logically! I only need 1 result and get 100), and I'm getting stuck in my own thoughts. I ultimately need to fill the ReportID into each record of REGI.
I know it should be possible, but I'm no expert in MySQL. Is there anybody who can point me in the right direction?
PS: fixing the table containing 400k records is not an option; it's a program from an external developer and I can only read that database.
The errors I'm talking about are as follows:
Error Code: 1093. You can't specify target table 'TrialTable' for update in FROM clause
When I use:
UPDATE TrialTable SET TrialTable.BlanccoReport =
(SELECT JT.report_id as ReportID, REGI.Serienummer as SerialNo FROM Blancco_Registration.TrialTable as REGI
JOIN (SELECT RE.Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry as RE
WHERE RE.path_id=92) AS JT ON JT.Value_string = REGI.Serienummer
WHERE REGI.HardwareType="PC" AND REGI.BlanccoReport=0 LIMIT 100)
WHERE TrialTable.HardwareType="PC" AND TrialTable.BlanccoReport=0)
Then I tried:
UPDATE TrialTable SET TrialTable.BlanccoReport = (SELECT ReportID FROM (<<and the rest of the SQL>>> ) as x WHERE X.SerialNo = TrialTable.Serienummer)
but that gave me the following error:
Error Code: 1242. Subquery returns more than 1 row
Using the query above with LIMIT 1 gives everything the same result.
Firstly, your query seems to be functionally identical to the following:
SELECT RE.report_id ReportID
, REGI.Serienummer SerialNo
FROM Blancco_Registration.TrialTable REGI
JOIN Blancco_new.mc_report_Entry RE
ON RE.Value_string = REGI.Serienummer
WHERE REGI.HardwareType = "PC"
AND REGI.BlanccoReport=0
AND RE.path_id=92
LIMIT 100
So, why not use that?
EDIT:
I still don't get it. I can't see what part of the problem the following fails to solve...
UPDATE TrialTable REGI
JOIN Blancco_new.mc_report_Entry RE
ON RE.Value_string = REGI.Serienummer
SET REGI.BlanccoReport = RE.report_id
WHERE REGI.HardwareType = "PC"
AND REGI.BlanccoReport=0
AND RE.path_id=92;
(This is not an answer, but maybe a pointer towards a few points that need further attention)
Your JT sub query looks suspicious to me:
(SELECT RE.Value_string, RE.report_id
FROM Blancco_new.mc_report_Entry as RE
WHERE RE.path_id=92
GROUP BY RE.report_id)
You use group by but don't actually use any aggregate functions. The column RE.Value_string should strictly be something like MAX(RE.Value_string) instead.
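If that GROUP BY version is what you actually run, the subquery would strictly need to look something like this (a sketch; whether MAX is the right aggregate depends on your data):
(SELECT MAX(RE.Value_string) AS Value_string, RE.report_id
 FROM Blancco_new.mc_report_Entry AS RE
 WHERE RE.path_id = 92
 GROUP BY RE.report_id)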

How to identify sequenced record gaps by field on MySQL

I can find sequenced records (consecutive weeks with the same number) using the following query.
SELECT * FROM pointed_numbers A WHERE EXISTS (
SELECT * FROM pointed_numbers B WHERE A.number = B.number AND (A.week = B.week + 1 XOR A.week = B.week - 1)
) ORDER BY A.number, A.week;
How can I identify each gap (i.e. each separate sequence) without a stored procedure? I have tried with user-defined variables but had no success.
Take a look at http://www.artfulsoftware.com/infotree/queries.php and look at the stuff under the "sequences" section. It is a super helpful site with recipes for how to do complicated things in MySQL!
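As a rough illustration of the user-variable approach (a sketch assuming the table is pointed_numbers(number, week); on MySQL 8.0+ window functions such as LAG() or ROW_NUMBER() are the cleaner option): a new group id is started whenever the week is not exactly one more than the previous week for the same number.
SELECT t.number, t.week,
       @grp := IF(t.number = @prev_number AND t.week = @prev_week + 1, @grp, @grp + 1) AS seq_group,
       @prev_number := t.number AS prev_number,
       @prev_week := t.week AS prev_week
FROM (SELECT number, week FROM pointed_numbers ORDER BY number, week) AS t
CROSS JOIN (SELECT @grp := 0, @prev_number := NULL, @prev_week := NULL) AS vars;
-- evaluation order of variable assignments inside a SELECT list is not guaranteed,
-- so treat this as a sketch rather than production code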

mysql - satisfy composite primary key while using 'insert into xxx select'

I am importing data into a table structured content_id|user_id|count - all integers, together forming the composite primary key.
The table I want to select it from is structured: content_id|user_id
For reasons quite specific to my use case, I will need to fire quite a lot of data into this regularly, so I want a pure MySQL solution.
insert into new_db.table
select content_id,user_id,xxx from old_db.table
I want each row to go in with xxx set to 0, unless this would create a duplicate key, in which case I want to increment the number for the current user_id/content_id combination.
Not being a MySQL expert, I tried a few options, like trying to populate xxx by selecting from the target table during the insert, with no luck. I also tried using ON DUPLICATE KEY to increment counters instead of the usual UPDATE. But it all seemed a bit daft, so I thought I would come here!
Does anyone have any ideas? I have a backup option of wrapping this in PHP, but that would drastically raise the overall running time of the script, in which this would be the only non-pure-MySQL part.
Any help really appreciated. Thanks in advance!
--edit
This may sound really awful in principle, but I'd settle for a way to do it in an UPDATE after entering random numbers (I have inserted random numbers so I can continue other work for the moment) - and this is a purely dev setup.
--edit again
12|234
51|45
51|45
51|45
23|67
would ideally insert
12|234|0
51|45|0
51|45|1
51|45|2
23|67|0
INSERT INTO new_db.table (content_id, user_id, cnt)
SELECT old.content_id, old.user_id, COUNT(*) - 1 FROM old_db.table old
GROUP BY old.content_id, old.user_id
This is the way I would go: with 1 entry it puts 0 in cnt, and with more it just puts 1, 2, 3, etc.
Edit:
The exact answer for your case is somewhat more complicated, but I tested it and it works:
INSERT INTO newtable(user_id,content_id,cnt)
SELECT o1.user_id, o1.content_id,
CASE
WHEN COALESCE(@rownum, 0) = 0
THEN @rownum := c - 1
ELSE @rownum := @rownum - 1
END as cnt
FROM
(SELECT user_id, content_id, COUNT(*) as c FROM oldtable
GROUP BY user_id, content_id ) as grpd
LEFT JOIN
(SELECT oldtable.* FROM oldtable) o1 ON
(o1.user_id = grpd.user_id AND o1.content_id = grpd.content_id)
;
Assuming that the old db table (source) will not have the same (content_id, user_id) combination more than once, you can import using this query:
insert newdbtable
select o.content_id, o.user_id, ifnull(max(n.`count`),-1)+1
from olddbtable o
left join newdbtable n on n.content_id=o.content_id and n.user_id=o.user_id
group by o.content_id, o.user_id;

Using SSIS to perform an operation with high performance

I'm trying to build a user network based on the call detail records in my CDR table.
To make things simple, let's say I've got a CDR table:
CDRid
UserAId
UserBId
There are more than 100 million records, so the table is quite big.
I created a user2user table:
UserAId
UserBId
NumberOfConnections
Then, using a cursor, I iterate through each row in the table and run a select statement:
if the user2user table has a record with UserAId = UserAId from the CDR record and UserBId = UserBId from the CDR record, then increase NumberOfConnections;
otherwise insert such a row with NumberOfConnections = 1.
Quite a simple task, and it works as described using a cursor, but the performance is very bad (estimated time on my computer ~60 h).
I heard that SQL Server Integration Services has better performance when dealing with such big tables.
The problem is that I have no idea how to build an SSIS package for such a task.
If anyone has any idea how to help me, or any good resources, I would be really thankful.
Maybe there is some other good solution to make it faster. I have used indexes, table variables and so on, and the performance is still poor.
Thanks for the help.
P.S.
This is the script I wrote; executing it takes something like 40-50 h.
DECLARE CDR_cursor CURSOR FOR
SELECT CDRId, SubscriberAId, BNumber
FROM dbo.CDR
OPEN CDR_cursor;
FETCH NEXT FROM CDR_cursor
INTO @CdrId, @SubscriberAId, @BNumber;
WHILE @@FETCH_STATUS = 0
BEGIN
-- Check whether there is a user with this number. The CDR only has SubscriberAId
-- and BNumber, so I need to look up which user this is (I only have users from my
-- network, so whenever I can't find the user I add one that is outside the network).
SELECT @UserBId = (SELECT UserID FROM dbo.Number WHERE Number = @BNumber)
IF (@UserBId IS NULL)
BEGIN
INSERT INTO dbo.[User] (ID, Marked, InNetwork)
VALUES (@OutUserId, 0, 0);
INSERT INTO dbo.[Number] (Number, UserId) VALUES (@BNumber, @OutUserId);
INSERT INTO dbo.User2User
VALUES (@SubscriberAId, @OutUserId, 1)
SET @OutUserId = @OutUserId - 1;
END
ELSE
BEGIN
UPDATE dbo.User2User
SET NumberOfConnections = NumberOfConnections + 1
WHERE User1ID = @SubscriberAId AND User2ID = @UserBId
-- Insert the row if the UPDATE statement failed.
IF (@@ROWCOUNT = 0)
BEGIN
INSERT INTO dbo.User2User
VALUES (@SubscriberAId, @UserBId, 1)
END
END
SET @Counter = @Counter + 1;
IF ((@Counter % 100000) = 0)
BEGIN
PRINT CAST(@Counter AS NVARCHAR(12));
END
FETCH NEXT FROM CDR_cursor
INTO @CdrId, @SubscriberAId, @BNumber;
END
CLOSE CDR_cursor;
DEALLOCATE CDR_cursor;
The thing about SSIS is that it probably won't be much faster than a cursor. It's pretty much doing the same thing: reading the table record by record, processing the record and then moving to the next one. There are some advanced techniques in SSIS like sharding the data input that will help if you have heavy duty hardware, but without that it's going to be pretty slow.
A better solution would be to write an INSERT and an UPDATE statement that will give you what you want. With that you'll be better able to take advantage of indices on the database. They would look something like:
WITH SummaryCDR (UserAId, UserBId, Conns) AS
(
SELECT UserAId, UserBId, COUNT(1) FROM CDR
GROUP BY UserAId, UserBId)
UPDATE user2user
SET NumberOfConnections = NumberOfConnections + SummaryCDR.Conns
FROM SummaryCDR
WHERE SummaryCDR.UserAId = user2user.UserAId
AND SummaryCDR.UserBId = user2user.UserBId
INSERT INTO user2user (UserAId, UserBId, NumberOfConnections)
SELECT CDR.UserAId, CDR.UserBId, Count(1)
FROM CDR
LEFT OUTER JOIN user2user
ON user2user.UserAId = CDR.UserAId
AND user2user.UserBId = CDR.UserBId
WHERE user2user.UserAId IS NULL
GROUP BY CDR.UserAId, CDR.UserBId
(NB: I don't have time to test this code, you'll have to debug it yourself)
is this what you need?
select
UserAId, UserBId, count(CDRid) as count_connections
from cdr
group by UserAId, UserBId
Could you break the conditional update/insert into two separate statements and get rid of the cursor?
Do the INSERT for all the NULL rows and the UPDATE for all the NOT NULL rows.
Why are you even considering row-by-row processing on a table that size? You can use the MERGE statement to insert or update, and it will be faster. Or you could write one set-based UPDATE for all the rows that need updating and one set-based INSERT for all the rows that don't yet exist.
Stop using the VALUES clause and use an INSERT with joins instead. Same thing with updates. If you need extra complexity, a CASE statement will probably give you all you need.
In general, stop thinking in terms of row-by-row processing. If you can write a SELECT for the cursor, you can write a set-based statement to do the work 99.9% of the time.
You may still want a cursor with a table this large, but one that processes batches of data (for instance 1000 records at a time), not one that runs row by row.
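A rough sketch of the MERGE approach (the CDR, Number and User2User column names are taken from your script; this only covers B-numbers that already resolve to a user, so the out-of-network handling would still need a separate step):
WITH SummaryCDR AS
(
    SELECT c.SubscriberAId AS UserAId, n.UserID AS UserBId, COUNT(*) AS Conns
    FROM dbo.CDR c
    JOIN dbo.Number n ON n.Number = c.BNumber
    GROUP BY c.SubscriberAId, n.UserID
)
MERGE dbo.User2User AS target
USING SummaryCDR AS source
    ON target.User1ID = source.UserAId
   AND target.User2ID = source.UserBId
WHEN MATCHED THEN
    UPDATE SET NumberOfConnections = target.NumberOfConnections + source.Conns
WHEN NOT MATCHED THEN
    INSERT (User1ID, User2ID, NumberOfConnections)
    VALUES (source.UserAId, source.UserBId, source.Conns);
One pass over CDR like this replaces millions of single-row lookups with a single set-based statement, which is exactly where the time goes with the cursor.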