I want to insert records from one table into another. I am selecting user id, date, and variance. When I insert the data of one user it works fine, but when I insert multiple records it gives me the error: SQL Error [1292] [22001]: Data truncation: Truncated incorrect time value: '841:52:24.000000'.
insert into
features.Daily_variance_of_time_between_calls(
uId,
date,
varianceBetweenCalls)
SELECT
table_test.uid as uId,
SUBSTRING(table_test.date, 1, 10) as date ,
VARIANCE(table_test.DurationSinceLastCall) as varianceBetweenCalls # calculating the variance of inter-event call time
FROM
(SELECT
id,m.uid, m.date,
TIME_TO_SEC(
timediff(m.date,
COALESCE(
(SELECT p.date FROM creditfix.call_logs AS p
WHERE
p.uid = m.uid
AND
p.`type` in (1,2)
AND
(p.id < m.id AND p.date < m.date )
ORDER BY m.date DESC, p.duration
DESC LIMIT 1 ), m.date))
) AS DurationSinceLastCall,
COUNT(1)
FROM
(select distinct id, duration, date,uid from creditfix.call_logs as cl ) AS m
WHERE
m.uId is not NULL
AND
m.duration > 0
# AND
# m.uId=171
GROUP BY 1,2
) table_test
GROUP BY 1,2
If I remove the comment, it works fine for one specific user.
Let's start with the error message:
Data truncation: Truncated incorrect time value: '841:52:24.000000'
This message suggests that at some stage MySQL is running into a value it cannot convert to a date/time/datetime. Efforts to isolate the issue should therefore begin with a focus on where values are being converted to those data types.
Without knowing the data types of all the fields used, it's difficult to say where the problem is likely to be. However, once we knew that the query on its own ran without complaint, we also knew that the problem had to be with a conversion happening during the insert itself: something in the selected data wasn't a valid date, but was being inserted into a date field. Although dates and times are involved in your calculation of varianceBetweenCalls, VARIANCE itself returns a numeric data type. Therefore I deduced the problem had to be with the data returned by SUBSTRING(table_test.date, 1, 10), which was being inserted into the date field.
As per the comments, this turned out to be correct. You can exclude the bad data and allow the insert to work by adding the clause:
WHERE
table_test.date NOT LIKE '841%'
AND table_test.DurationSinceLastCall NOT LIKE '841%' -- I actually think this line is not required.
Alternatively, you can retrieve only the bad data (with a view to fixing it), by removing the INSERT and using the clause
WHERE
table_test.date LIKE '841%'
OR table_test.DurationSinceLastCall LIKE '841%' -- I actually think this line is not required.
or better
SELECT *
FROM creditfix.call_logs m
WHERE m.date LIKE '841%'
However, I'm not sure of the data type of that field, so you may need to do it like this:
SELECT *
FROM creditfix.call_logs m
WHERE SUBSTRING(m.date,10) LIKE '841%'
Once you correct the offending data, you should be able to remove the "fix" from your INSERT/SELECT statement, though it would be wise to investigate how the bad data got into the system.
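As part of that investigation, one way to list the rows whose date column doesn't parse cleanly is STR_TO_DATE, which returns NULL when the string doesn't match the format (a sketch; the format string is an assumption about how the column is stored):

```sql
-- Rows whose `date` value does not parse as 'YYYY-MM-DD HH:MM:SS'
-- (STR_TO_DATE returns NULL on a mismatch; this also catches NULL dates)
SELECT *
FROM creditfix.call_logs
WHERE STR_TO_DATE(`date`, '%Y-%m-%d %H:%i:%s') IS NULL;
```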
I have been trying to do this in many of the suggested ways.
Note: we do not want an aggregate function or PARTITION, since this is just a small part of a whole stored procedure and it is a client requirement not to have it; so that is not an option, and this is not a duplicate of other existing answers/questions.
I have a messages table, which has columns from and to, foreign keys to the user table; at its simplest, which user sends to whom. I also have other columns, isSnoozed and snoozeAt, for when a message is snoozed.
So the ordering depends on a case: if a message is snoozed, use the snoozeAt time to order; if not, use sendAt. (Right now we can ignore this condition while ordering, but I mention it because we cannot simply take MAX(id).)
I need to get the most recent message from messages, grouped by the from user id.
messages table like :
id -- to -- from -- isSnoozed -- snoozedAt -- sendAt ...
What I tried :
select * from ( select * from messages order by sendAt DESC) as TEMP GROUP BY TEMP.from
I tried many similar approaches, but none worked.
I wasted many paid hours but can't find an approach that meets my exact requirement.
NOTE: Please ignore any typos in the query, since I can't type in the exact query, table, and column names, so I typed it directly here.
I figured this out by doing something like this, which can be explained in a simplified way:
select * from message where message.id in (
select
( select id from message where message.from = user.id order by CASE isSnoozed WHEN 0 THEN sendAt ELSE snoozeAt END DESC limit 1) as id
from user where user.id in ( select friends.`whoIsAdded` from friends where friends.`whoAdded` = myId)
) order by CASE isSnoozed WHEN 0 THEN sendAt ELSE snoozeAt END DESC
If I understand correctly, you just want the largest value in one of two columns. Assuming the values are never NULL, you can use greatest():
select m.*
from messages m
where greatest(m.sendAt, m.snoozedAt) =
(select max(greatest(m2.sendAt, m2.snoozedAt))
from messages m2
where m2.from = m.from
);
If the columns can be NULL, then you can use coalesce() to give them more reasonable values.
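For instance, a hedged sketch of that NULL handling, substituting a sentinel date that sorts before any real timestamp (the '1000-01-01' value is an assumption; any date older than your data works):

```sql
-- Same greatest() comparison, but NULL-safe: a NULL column falls back
-- to a sentinel that can never win the max()
select m.*
from messages m
where greatest(coalesce(m.sendAt,    '1000-01-01'),
               coalesce(m.snoozedAt, '1000-01-01')) =
      (select max(greatest(coalesce(m2.sendAt,    '1000-01-01'),
                           coalesce(m2.snoozedAt, '1000-01-01')))
       from messages m2
       where m2.from = m.from
      );
```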
I'm stuck on a MySQL problem that I have not been able to solve yet. I have the following query, which brings me the month-year and the number of new users for each period on my platform:
select
u.period ,
u.count_new as new_users
from
(select DATE_FORMAT(u.registration_date,'%Y-%m') as period, count(distinct u.id) as count_new from users u group by DATE_FORMAT(u.registration_date,'%Y-%m')) u
order by period desc;
The result is the table:
period,new_users
2016-10,103699
2016-09,149001
2016-08,169841
2016-07,150672
2016-06,148920
2016-05,160206
2016-04,147715
2016-03,173394
2016-02,157743
2016-01,173013
So, I need to calculate, for each month-year, the difference between that period and the previous month-year. I need a result table like this:
period,new_users
2016-10,calculate(103699 - 149001)
2016-09,calculate(149001- 169841)
2016-08,calculate(169841- 150672)
2016-07,So on...
2016-06,...
2016-05,...
2016-04,...
2016-03,...
2016-02,...
2016-01,...
Any ideas? =/
Thanks
You should be able to use a similar approach to one I posted in another S/O question. You are off to a good start: you have your inner query get the counts and have it ordered in the final direction you need. By using inline MySQL variables, you can keep a holding column of the previous record's value, use that as the computation base for the next result, then set the variable to the new value to be used in each subsequent cycle.
The JOIN to the SqlVars alias does not have any ON condition, as SqlVars returns only a single row anyhow and so does not result in a Cartesian product.
select
u.period,
if( @prevCount = -1, 0, u.count_new - @prevCount ) as new_users,
@prevCount := u.count_new as HoldColumnForNextCycle
from
( select
DATE_FORMAT(u.registration_date,'%Y-%m') as period,
count(distinct u.id) as count_new
from
users u
group by
DATE_FORMAT(u.registration_date,'%Y-%m') ) u
JOIN ( select @prevCount := -1 ) as SqlVars
order by
u.period desc;
You may have to play with it a little, as there is no "starting" point for the counts, so the first entry in either sort direction may look strange. I am starting the "@prevCount" variable at -1, so the first record processed gets a new-user count of 0 in the "new_users" column. THEN, whatever the distinct new-user count was for that record is assigned back to @prevCount as the basis for all subsequent records being processed. Yes, it is an extra column in the result set that can be ignored, but it is needed. It is just a per-line placeholder, and you can see in the result query how it gets its value as each line progresses.
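For what it's worth, on MySQL 8.0+ the same running difference can be written with the LAG() window function, avoiding session variables entirely (a sketch against the same users table; COALESCE makes the oldest month's difference 0, since it has no predecessor):

```sql
-- Each month's new-user count minus the previous month's count
SELECT period,
       count_new - COALESCE(LAG(count_new) OVER (ORDER BY period),
                            count_new) AS new_users
FROM (
    SELECT DATE_FORMAT(u.registration_date, '%Y-%m') AS period,
           COUNT(DISTINCT u.id) AS count_new
    FROM users u
    GROUP BY DATE_FORMAT(u.registration_date, '%Y-%m')
) t
ORDER BY period DESC;
```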
I would create a temp table with two columns and then fill it using a cursor that
does something like this (I don't remember the exact syntax, so this is just pseudo-code):
@val = CURSOR.col2 - (select col2 from OriginalTable t2 where t2.Period = (CURSOR.Period - 1))
INSERT tmpTable (Period, NewUsers) VALUES (CURSOR.Period, @val)
I need some assistance with my extract. Below is a view of my data and how it is extracted from a MS SQL database.
My challenge is that the database does not differentiate between the different "email address" rows. How do I link an email address record to the record above it?
Secid|Name|Question|Answer|
2|load1|Name of Principle|Joe Make|
2|load1|Contact Number|12234423|
2|load1|Email address|joemake@mymail.com|
2|load1|Name of Principle|Amy Soup|
2|load1|Contact Number of Principle|23134|
2|load1|Email address|amysoup@mymail.com|
2|load1|Name of Teacher|james blue|
2|load1|Contact Number|8787878|
2|load1|Email Address|jamesblue@mymail.com|
2|load1|Name of Secretary|CHARLES black|
2|load1|Contact Number|989897|
2|load1|Email Address|chblack@mymail.com|
If you don't have any column to order by (e.g. a monotonically increasing identity column, or a timestamp), I'm afraid you're honestly out of luck. There is no way to guarantee any sort of ordering of the rows for any query.
What you can do, however, is export the data into an Excel sheet, look at it manually, and put the rows in the right order, assuming you can figure it out. Unfortunately, that is really going to be the only way.
If you had a column you could order by, you could use a join to group the rows, assuming you have a way of identifying the start of each set; in your case, a Question like 'Name of %' should probably work. Assuming an identity column called Id, something like:
select t.*, tGroupStart.Id as GroupId
from myTable t
join myTable tGroupStart on tGroupStart.Id <= t.Id
and tGroupStart.Question like 'Name of %'
where not exists (
select 1
from myTable t2
where t2.Id <= t.Id
and t2.Question like 'Name of %'
and t2.Id > tGroupStart.Id
)
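On SQL Server 2012+, the same grouping can also be sketched with a running sum over a window, which avoids the self-joins (assuming the same hypothetical Id and Question columns; note GroupId here is a sequential group number rather than the Id of the group's first row):

```sql
-- Count how many 'Name of %' rows occur at or before each row;
-- every row between two 'Name of %' markers gets the same number
SELECT t.*,
       SUM(CASE WHEN t.Question LIKE 'Name of %' THEN 1 ELSE 0 END)
           OVER (ORDER BY t.Id ROWS UNBOUNDED PRECEDING) AS GroupId
FROM myTable t;
```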
I am running a series of queries, starting with selecting everything from a view and inserting it into a newly-wiped table:
DELETE FROM GTMP_PROGRESS
WHERE OBJID IS NOT NULL;
INSERT IGNORE INTO GTMP_PROGRESS
SELECT * FROM GTMP_PROG_VIEW;
Those queries work fine. The data in GTMP_PROGRESS looks exactly the same as the data in the view.
THEN I run a series of JOINs with individual tables to update GTMP_PROGRESS based on criteria. Here are a couple of examples:
UPDATE IGNORE GTMP_PROGRESS AS gtmp
JOIN (SELECT cast(EU AS UNSIGNED) AS eu
, MAX(START_TIME) AS max
, CLUSTER_COMPLETE AS complete
FROM `BENIN_CLEAN`
GROUP BY EU) AS benin
ON gtmp.EUID = benin.EU
SET gtmp.Completed_Date = benin.max
WHERE benin.complete>0;
UPDATE IGNORE GTMP_PROGRESS AS gtmp
JOIN (SELECT cast(EU AS UNSIGNED) AS eu
, MAX(START_TIME) AS max
, CLUSTER_COMPLETE AS complete
FROM `IMPACT_CAMBODIA_CLEAN`
GROUP BY EU) AS cambodia
ON gtmp.EUID = cambodia.EU
SET gtmp.Completed_Date = cambodia.max
WHERE cambodia.complete>0;
This continues for around 30 joins. At the end of these JOIN queries, I have 577 Completed_Date entries with the following datetime: "2013-03-08 01:50:31". The EUs with this datetime are NOT the EUs that should be affected by the JOIN queries; these are entries that existed in the original view. Before these JOIN queries are run, the values of Completed_Date are either another datetime or NULL.
A bit of history: these queries worked perfectly before the Completed_Date field underwent a standardization of all of its dates so that they qualified as datetime values. The _CLEAN tables have always used the datetime format exclusively, so I don't know whether that could have caused the issue.
I cannot make ANY sense of this.
I have an nvarchar(max) field named PublishTime, which contains the time as HH:MMtt (e.g. 10:20AM). Now I want to sort the results based on this field, but as it is the nvarchar type, I cannot use it directly in the ORDER BY clause. Can anyone help me sort results based on this field?
I tried the query below, which throws a conversion error:
select c.PublishDate, c.PublishTime from CrossArticle_Article c
inner join CrossArticle_ArticleToCategory a2c
on c.Id = a2c.ArticleId
inner join CrossArticle_Category cc
on a2c.CategoryId = cc.Id
where cc.Id = 86 order by c.PublishDate, convert(nvarchar(max), cast(c.PublishTime as time)) desc
In SQL Server 2008, there is a TIME datatype which is ideal here. I'd really recommend switching the datatype of that field: use the smallest, most appropriate datatype for the data it's going to hold.
Assuming all the values are valid times, here's a full example to demonstrate:
CREATE TABLE #Example
(
PublishTime NVARCHAR(MAX)
)
-- Insert some sample data
INSERT #Example (PublishTime) VALUES ('10:20AM'), ('10:20PM'), (NULL), ('')
-- Demonstrate the NVARCHAR -> TIME conversion
SELECT PublishTime, CAST(PublishTime AS TIME) AS ConvertedToTimeDataType
FROM #Example
-- This will change the datatype on the table
ALTER TABLE #Example
ALTER COLUMN PublishTime TIME
-- Now check what is now in the table
SELECT * FROM #Example
DROP TABLE #Example
So, if you can switch the datatype, you can then do:
SELECT Something
FROM YourTable
ORDER BY PublishTime
No converting or special handling of the field is required, which means that if there's a suitable index in place on PublishTime, it will be able to be used.
try this:
select c.PublishDate, c.PublishTime from CrossArticle_Article c
inner join CrossArticle_ArticleToCategory a2c
on c.Id = a2c.ArticleId
inner join CrossArticle_Category cc
on a2c.CategoryId = cc.Id
where cc.Id = 86 order by cast(c.PublishTime as TIME) desc
You need to make a datetime field. You have date and time in separate fields as strings, which is an antipattern to be avoided.
You can utilize CROSS APPLY to refer to the concatenated datetime field you make in the order by:
CROSS APPLY (SELECT CAST((PUBLISHDATE + ' ' +PUBLISHTIME) as datetime)) CxA(FullDate)
ORDER BY FullDate
Erm, what's this?
order by c.PublishDate, convert(nvarchar(max), cast(c.PublishTime as time)) desc
That's casting a string to a time and then converting it back to a string.
It should be
order by c.PublishDate, cast(c.PublishTime as time) desc
or CONVERT, if it won't CAST for some reason.
Like others, though, I'd build the date and time into one value, and then convert/cast that for ordering.
I'd also seriously look at doing something when bringing the data in to get a datetime I could index, unless this is a very infrequent requirement; as it stands, it's going to be horribly inefficient.
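One hedged way to do that (assuming SQL Server 2012+ for TRY_CONVERT, and that both columns hold strings in convertible formats; the PublishedAt column and index name are made up for illustration):

```sql
-- Add a real datetime column, backfill it once from the two string
-- columns, and index it so ORDER BY can use the index
ALTER TABLE CrossArticle_Article ADD PublishedAt datetime NULL;

UPDATE CrossArticle_Article
SET PublishedAt = TRY_CONVERT(datetime, PublishDate + ' ' + PublishTime);

CREATE INDEX IX_CrossArticle_PublishedAt
    ON CrossArticle_Article (PublishedAt);
```

If PublishDate is actually a date/datetime column rather than a string, convert it to a string (e.g. with CONVERT and style 120) before concatenating; and on SQL Server 2008, where TRY_CONVERT doesn't exist, clean the unconvertible rows first and use CONVERT.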