This question is related to a question I posted a while ago, which can be found here: Update values of database with values that are already in DB.
I have the following situation: a table that stores data from different sensors (I have 8 sensors in total). Each row of the table has the following structure:
SensorID --- TimestampMS --- RawData --- Data
So, for example, for a temperature sensor called TEMPSensor1 I have the following:
TEMPSensor1 --- 1000 --- 200 --- 2
TEMPSensor1 --- 2000 --- 220 --- 2.2
And so on for each sensor (8 in total). I have some problems reading the data, and there are rows whose data is not correct: specifically, whenever the rawdata field is 65535, I need to update that row. What I would like to do is replace that "corrupted data" with the next value (in time). So, if we have this:
TEMPSensor1 --- 1000 --- 200 --- 2
TEMPSensor1 --- 2000 --- 220 --- 2.2
TEMPSensor1 --- 3000 --- 65535 --- 655.35
TEMPSensor1 --- 4000 --- 240 --- 2.4
After doing the Update, the content of the table should be changed to:
TEMPSensor1 --- 1000 --- 200 --- 2
TEMPSensor1 --- 2000 --- 220 --- 2.2
TEMPSensor1 --- 3000 --- 240 --- 2.4
TEMPSensor1 --- 4000 --- 240 --- 2.4
I've ended up doing the following:
UPDATE externalsensor es1
INNER JOIN externalsensor es2
   ON es1.sensorid = es2.sensorid
  AND (es2.timestampms - es1.timestampms) > 60000
  AND (es2.timestampms - es1.timestampms) < 120000
  AND es2.rawdata <> 65535
SET es1.rawdata = es2.rawdata, es1.data = es2.data
WHERE es1.rawdata = 65535
Because I know that between two reads from a sensor there are between 60000 and 120000 ms. However, if I have two consecutive "corrupted" readings, that won't work. Can anyone suggest a way to do this more efficiently, using JOINs rather than subquery selects? My idea would be to have a JOIN that gives you all the possible values for that sensor after its timestampms and just take the first one, but I don't know how to limit that JOIN result.
Any help is appreciated.
Here's a solution without correlated subqueries, but with a triangular join (not sure which is worse):
UPDATE externalsensor bad
INNER JOIN (
SELECT
es1.SensorID,
es1.TimestampMS,
MIN(es2.TimestampMS) AS NextGoodTimestamp
FROM externalsensor es1
INNER JOIN externalsensor es2
ON es1.SensorID = es2.SensorID AND
es1.TimestampMS < es2.TimestampMS
WHERE es1.RawData = 65535
AND es2.RawData <> 65535
GROUP BY
es1.SensorID,
es1.TimestampMS
) link ON bad.SensorID = link.SensorID AND
bad.TimestampMS = link.TimestampMS
INNER JOIN externalsensor good
ON link.SensorID = good.SensorID AND
link.NextGoodTimestamp = good.TimestampMS
SET
bad.RawData = good.RawData,
bad.Data = good.Data
It is assumed that the timestamps are unique within a single sensor group.
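If that assumption holds but isn't enforced yet, a unique key would both guarantee it and help the two self-joins above (a sketch, assuming the table and column names used in the query):

-- Sketch: enforce the per-sensor timestamp uniqueness the query relies on.
ALTER TABLE externalsensor
  ADD UNIQUE KEY uq_sensor_ts (SensorID, TimestampMS);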
A completely different approach, using a procedure that runs through the whole table (in descending time order for every sensor):
DELIMITER $$
CREATE PROCEDURE updateMyTable()
BEGIN
SET @dummy := -9999;
SET @dummy2 := -9999;
SET @sensor := -999;
UPDATE myTable m
JOIN
( SELECT n.SensorID
, n.TimestampMS
, @d := (n.RawData = 65535) AND (@sensor = n.SensorID) AS problem
, @dummy := IF(@d, @dummy, n.RawData) AS goodRawData
, @dummy2 := IF(@d, @dummy2, n.Data) AS goodData
, @sensor := n.SensorID AS previous
FROM myTable n
ORDER BY n.SensorID
, n.TimeStampMS DESC
) AS upd
ON m.SensorID = upd.SensorID
AND m.TimeStampMS = upd.TimeStampMS
SET m.RawData = upd.goodRawData
, m.Data = upd.goodData
WHERE upd.problem
;
END$$
DELIMITER ;
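Running the fix is then a one-liner (with an optional cleanup once the data is repaired):

CALL updateMyTable();
-- optional: remove the procedure after the bad rows have been fixed
DROP PROCEDURE IF EXISTS updateMyTable;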
Since you don't want to use Dems' solution from the previous question, here's a "solution" with JOINs:
UPDATE myTable m
JOIN myTable n
ON m.SensorID = n.SensorID
AND n.RawData <> 65535
AND m.TimestampMS < n.TimestampMS
JOIN myTable q
ON n.SensorID = q.SensorID
AND q.RawData <> 65535
AND n.TimestampMS <= q.TimestampMS
SET
m.RawData = n.RawData,
m.Data = n.Data
WHERE
m.RawData = 65535
;
EDIT
My query above is wrong, dead wrong. It appears to be working in my test db but the logic is flawed. I'll explain below.
Why the above query works fine but is dead wrong:
First, why it's wrong.
Because it will not return one row for every (sensorID, bad timestamp) combination, but many rows. If m (m.TimestampMS) is the bad timestamp we want to find, it will return all combinations of that bad timestamp and later good timestamps n and q with n.TimestampMS <= q.TimestampMS. It would be a correct query if it found the MINIMUM of these n timestamps.
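For completeness, a hedged sketch of that fix: first compute the minimum good timestamp per bad row in a derived table (which ends up close to the MIN/GROUP BY answer above), then join back to fetch its values:

UPDATE myTable m
JOIN ( -- for every bad row, find the FIRST later good timestamp
       SELECT b.SensorID, b.TimestampMS, MIN(g.TimestampMS) AS NextGood
       FROM myTable b
       JOIN myTable g
         ON g.SensorID = b.SensorID
        AND g.RawData <> 65535
        AND g.TimestampMS > b.TimestampMS
       WHERE b.RawData = 65535
       GROUP BY b.SensorID, b.TimestampMS ) x
  ON m.SensorID = x.SensorID AND m.TimestampMS = x.TimestampMS
JOIN myTable n -- the single good row, instead of all later good rows
  ON n.SensorID = x.SensorID AND n.TimestampMS = x.NextGood
SET m.RawData = n.RawData,
    m.Data = n.Data;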
Now, how come it actually works all right in my test db?
I think it's because when MySQL comes to the SET ... and has a lot of options (rows), it just uses the first one. But lucky me, I added the test rows in increasing timestamp order, so they were saved in that order in the db, and (again lucky me) this is how the query plan happened to be scheduled (I presume).
Even this query works in my test db:
UPDATE myTable m
JOIN myTable n
ON m.SensorID = n.SensorID
AND n.RawData <> 65535
AND m.TimestampMS < n.TimestampMS
SET
m.RawData = n.RawData,
m.Data = n.Data
WHERE
m.RawData = 65535
;
while being flawed for the same reasons.
I want to fetch data from MySQL with seqlpro in R, but when I run the query it takes ages.
Here is my code:
old_value<- data.frame()
new_value<- data.frame()
counter<- 0
for (i in 1:length(short_list$id)) {
mydb = OpenConn(dbname = '**', user = '**', password = '**', host = '**')
query <- paste0("select * from table where id IN (",short_list$id[i],") and country IN ('",short_list$country[i],"') and date >= '2019-04-31' and `date` <= '2020-09-1';", sep = "" )
temp_old <- RMySQL::dbFetch(RMySQL::dbSendQuery(mydb, query), n = -1)
query <- paste0("select * from table2 where id IN (",short_list$id[i],") and country IN ('",short_list$country[i],"') and date >= '2019-04-31' and `date` <= '2020-09-1';", sep = "" )
temp_new <- RMySQL::dbFetch(RMySQL::dbSendQuery(mydb, query), n = -1)
RMySQL::dbDisconnect(mydb)
new_value<- rbind(temp_new,new_value)
old_value<- rbind(temp_old,old_value)
counter=counter+1
base::print(paste("completed for ",counter),sep="")
}
Is there any way I can write this more efficiently so the queries run faster? I have around 5000 rows that have to go through the loop. The query works, but it takes a long time.
I have tried this, but it still gives me an error:
# parallel computing
clust <- makeCluster(length(6))
clusterEvalQ(cl = clust, expr = lapply(c('data.table',"RMySQL","dplyr","plyr"), library, character.only = TRUE))
clusterExport(cl = clust, c('config','short_list'), envir = environment())
new_de <- parLapply(clust, short_list, function(id,country) {
for (i in 1:length(short_list$id)) {
mydb = OpenConn(dbname = '*', user = '*', password = '*', host = '**')
query <- paste0("select * from table1 where id IN (",short_list$id[i],") and country IN ('",short_list$country[i],"') and source_event_date >= date >= '2019-04-31' and `date` <= '2020-09-1';", sep = "" )
temp_data <- RMySQL::dbFetch(RMySQL::dbSendQuery(mydb, query), n = -1) %>% data.table::data.table()
RMySQL::dbDisconnect(mydb)
return(temp_data)}
})
stopCluster(clust)
gc(reset = T)
new_de <- data.table::rbindlist(new_de, use.names = TRUE)
I have also defined short_list as a list, as follows:
short_list<- as.list(short_list)
and inside short_list is:
id    country
2     US
3     UK
...   ...
However it gives me this error:
Error in checkForRemoteErrors(val) :
one node produced an error: object 'i' not found
However, when I remove i from id[i] and country[i], it only gives me the result for the first row, rather than the results for all ids and countries.
I think an alternative is to upload the ids you need into a temporary table, and query for everything at once.
tmptable <- "mytemptable"
dbWriteTable(conn, tmptable, short_list, create = TRUE)
alldat <- dbGetQuery(conn, paste("
select t1.*
from ", tmptable, " tmp
left join table1 t1 on tmp.id=t1.id and tmp.country=t1.country
where t1.`date` >= '2019-04-31' and t1.`date` <= '2020-09-1'"))
dbExecute(conn, paste("drop table", tmptable))
(Many DBMSes use a leading # to indicate a temporary table that is only visible to the local user, is much less likely to clash in the schema namespace, and is automatically cleaned when the connection is closed. I generally encourage use of temp-tables here, check with your DB docs, schema, and/or DBA for more info here.)
The order of tables is important: by pulling all from mytemptable and then left join table1 onto it, we are effectively filtering out any data from table1 that does not include a matching id and country.
This doesn't solve the speed of data download, but some thoughts on that:
Each time you iterate through the queries, you have not-insignificant overhead; if there's a lot of data then this overhead should not be huge, but it's still there. Using a single query will reduce this overhead significantly.
Query time can also be affected by any index(ices) on the tables. Outside the scope of this discussion, but might be relevant if you have a large-ish table. If the table is not indexed efficiently (or the query is not structured well to use those indices), then each query will take a finite amount of time to "compile" and return data. Again, overhead that will be reduced with a single more-efficient query.
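For instance (a sketch only; the columns assume the id/country/date filters from the query above and may need adjusting to your real schema):

-- Hypothetical composite index matching the join keys plus the date range:
CREATE INDEX idx_table1_id_country_date ON table1 (id, country, `date`);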
Large queries might benefit from using the command-line tool mysql; it is about as fast as you're going to get, and might iron over any issues in RMySQL and/or DBI. (I'm not saying they are inefficient, but ... it is unlikely that a free open-source driver will be faster than MySQL's own command-line utility.)
As for doing this in parallel ...
You're using parLapply incorrectly. It accepts a single vector/list and iterates over each object in that list. You might use it iterating over the indices of a frame, but you cannot use it to iterate over multiple columns within that frame. This is exactly like base R's lapply.
Let's show what is going on when you do your call. I'll replace it with lapply (because debugging in multiple processes is difficult).
# parLapply(clust, mtcars, function(id, country) { ... })
lapply(mtcars, function(id, country) { browser(); 1; })
# Called from: FUN(X[[i]], ...)
debug at #1: [1] 1
id
# [1] 21.0 21.0 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 17.8 16.4 17.3 15.2 10.4 10.4 14.7 32.4 30.4 33.9 21.5 15.5 15.2
# [24] 13.3 19.2 27.3 26.0 30.4 15.8 19.7 15.0 21.4
country
# Error: argument "country" is missing, with no default
Because the argument (mtcars here, short_list in yours) is a data.frame, and a data.frame is a list-like object, lapply (and parLapply) operate on one column at a time. You were hoping that it would "unzip" the data, applying the first column's value to id and the second column's value to country. In fact, there is a function that does this: Map (and parallel's clusterMap, as I suggested in my comment). More on that later.
The intent of parallelizing things is to not use the for loop inside the parallel function. If short_list has 10 rows, and if your use of parLapply were correct, then you would be querying all rows 10 times, making your problem significantly worse. In pseudo-code, you'd be doing:
parallelize for each row in short_list:
# this portion is run simultaneously in 10 different processes/threads
for each row in short_list:
query for data related to this row
Two alternatives:
Provide a single argument to parLapply representing the rows of the frame.
new_de <- parLapply(clust, seq_len(NROW(short_list)), function(rownum) {
mydb = OpenConn(dbname = '*', user = '*', password = '*', host = '**')
on.exit({ DBI::dbDisconnect(mydb) })
tryCatch(
DBI::dbGetQuery(mydb, "
select * from table1
where id=? and country=?
and source_event_date >= date >= '2019-04-31' and `date` <= '2020-09-1'",
params = list(short_list$id[rownum], short_list$country[rownum])),
error = function(e) e)
})
Use clusterMap for the same effect.
new_de <- clusterMap(clust, function(id, country) {
mydb = OpenConn(dbname = '*', user = '*', password = '*', host = '**')
on.exit({ DBI::dbDisconnect(mydb) })
tryCatch(
DBI::dbGetQuery(mydb, "
select * from table1
where id=? and country=?
and source_event_date >= date >= '2019-04-31' and `date` <= '2020-09-1'",
params = list(id, country)),
error = function(e) e)
}, short_list$id, short_list$country)
If you are not familiar with Map, it is like "zipping" together multiple vectors/lists. For example:
myfun1 <- function(i) paste(i, "alone")
lapply(1:3, myfun1)
### "unrolls" to look like
list(
myfun1(1),
myfun1(2),
myfun1(3)
)
myfun3 <- function(i,j,k) paste(i, j, k, sep = '-')
Map(f = myfun3, 1:3, 11:13, 21:23)
### "unrolls" to look like
list(
myfun3(1, 11, 21),
myfun3(2, 12, 22),
myfun3(3, 13, 23)
)
Some liberties I took in that adapted code:
I shifted from the dbSendQuery/dbFetch double-tap to a single call to dbGetQuery.
I'm using DBI functions, since DBI functions provide a superset of what each driver's package provides. (You're likely using some of it anyway, perhaps without realizing it.) You can switch back with no issue.
I added tryCatch, since sometimes errors can be difficult to deal with in parallel processes. This means you'll need to check the return value from each of your processes to see if either inherits(ret, "error") (problem) or is.data.frame (normal).
I used on.exit so that even if there's a problem, the connection closure should still occur.
How can I remove the unreal data that I'm getting after several joins?
My entire query is:
SELECT
distinct vortex_dbo.vw_public_material_location.material_name
,vw_public_request_material_location_mir.material_request_id
,vw_public_request_material_location_mir.parttype_name
,operation_code
,vw_public_request_material_location_mir.result_name
,vw_public_request_material_location_mir.qdf_number
, requestor
,[vortex_hvc].[vortex_dbo].[material_request].created_by
,[vortex_hvc].[vortex_dbo].[material_request].created_datetime as time1
,[vortex_hvc].[vortex_dbo].[material_request].distribution_list
,[vortex_hvc].[vortex_dbo].[material_request].recipient_name
, DATEPART(WW,[vortex_hvc].[vortex_dbo].[material_request].created_datetime) as WW
,vw_public_request_material_location_mir.product_code_name
,task_name
,vw_public_request_material_location_mir.full_location_name
FROM [vortex_hvc].[vortex_dbo].[vw_public_request_material_location_mir]
left join request on vw_public_request_material_location_mir.material_request_id = request.request_key
left join vortex_dbo.material_request on vw_public_request_material_location_mir.material_request_id = vortex_dbo.material_request.material_request_id
left join vortex_dbo.vw_public_material_location on vw_public_request_material_location_mir.last_result_id = vortex_dbo.vw_public_material_location.last_result_id
left join vortex_dbo.vw_public_material_history on vw_public_request_material_location_mir.material_request_id like (substring(vw_public_material_history.comments,12,6))
where (vw_public_request_material_location_mir.qdf_number not like 'null' and vw_public_request_material_location_mir.qdf_number not like '')
and vw_public_request_material_location_mir.product_code_name like 'LAKE%'
and vw_public_request_material_location_mir.task_id not like 'null'
and (vw_public_request_material_location_mir.result_name like 'bin 100' or vw_public_request_material_location_mir.result_name like 'bin 01'
or vw_public_request_material_location_mir.result_name like 'bin 02' or vw_public_request_material_location_mir.result_name like 'pass')
and (requestor like 'BUGANIM, RINAT' and employee_name like 'BUGANIM, RINAT')
and ( DateDiff(DD,[vortex_hvc].[vortex_dbo].[material_request].created_datetime, getdate()) < 180)
and (concat('',substring(vortex_dbo.vw_public_material_location.comments,12,6)) like vw_public_request_material_location_mir.material_request_id
or vortex_dbo.vw_public_material_location.comments like 'Changed by Matrix Transaction Handler' or vortex_dbo.vw_public_material_location.comments like 'Unit Ownership:%')
and (unit_number = vortex_dbo.vw_public_material_location.material_name or unit_number is null)
and vortex_dbo.vw_public_material_location.material_name like 'D7QM748200403'
order by vortex_dbo.vw_public_material_location.material_name desc
The results I'm getting are two rows, of which only the 2nd one contains true data:
material_name material_request_id parttype_name operation_code result_name qdf_number requestor created_by time1 WW product_code_name task_name full_location_name
D7QM748200403 332160 H6 4GXDCV K Y 7295 BIN 01 Q1T5 BUGANIM, RINAT SMS_Interface 2017-12-03 20:27:30.327 49 CANNON LAKE Y 2+2 PPV-M SAMPLE: QDF INVENTORY
D7QM748200403 332176 H6 4GXDCV K Y 7295 BIN 01 Q1T5 BUGANIM, RINAT SMS_Interface 2017-12-03 21:02:33.247 49 CANNON LAKE Y 2+2 PPV-M SAMPLE: QDF INVENTORY
What can I do in order to retrieve only the true data? I have more cases like this.
Thanks!!
I am making a cohort analysis processor. Input parameters: a time range and step, a condition (initial event) to extract cohorts, and an additional condition (retention event) to check after each N hours/days/months. Output: a cohort analysis grid, like this:
0h | 16h | 32h | 48h | 64h | 80h | 96h |
cohort #00 15 | 6 | 4 | 1 | 1 | 2 | 2 |
cohort #01 1 | 35 | 8 | 0 | 2 | 0 | 1 |
cohort #02 0 | 3 | 31 | 11 | 5 | 3 | 0 |
cohort #03 0 | 0 | 4 | 27 | 7 | 6 | 2 |
cohort #04 0 | 1 | 1 | 4 | 29 | 4 | 3 |
Basically:
fetch cohorts: the unique users who did event 1 in each period of length time_step, starting from time_begin.
find how many of them (in each cohort) did event 2 after N seconds, N*2 seconds, N*3, and so on, up until now.
In short, I have 2 solutions. One works but is too slow, and involves a heavy select with joins for each time step: 1 day, 2 days, 3 days, etc. I want to optimize it by joining the result for every time step to the cohorts - that's the second solution. It looks like it works, but I'm not sure it's the best way, or that it will give the same result even if cohorts intersect. Please check it out.
Here's the whole story.
I have a table of > 100,000 events, something like this:
#user-id, timestamp, event_name
events_view (uid varchar(64), tm int(11), e varchar(64))
example input row:
"user_sampleid1", 1423836540, "level_end:001:win"
To make a cohort analysis, I first extract the cohorts: for example, users who sent the special event '1st_launch' in 10-hour periods starting from 2015-02-13 and ending with 2015-02-16. All code in this post is simplified and shortened to show the idea.
DROP TABLE IF EXISTS tmp_c;
create temporary table tmp_c (uid varchar(64), tm int(11), c int(11) );
set beg = UNIX_TIMESTAMP('2015-02-13 00:00:00');
set en = UNIX_TIMESTAMP('2015-02-16 00:00:00');
select min(tm) into t_start from events_view ;
select max(tm) into t_end from events_view ;
if beg < t_start then
set beg = t_start;
end if;
if en > t_end then
set en = t_end;
end if;
set period = 3600 * 10;
set cnt_c = ceil((en - beg) / period) ;
/*works quick enough*/
WHILE i < cnt_c DO
insert into tmp_c (
select uid, min(tm), i from events_view where
locate("1st_launch", e) > 0 and tm > (beg + period * i)
AND tm <= (beg + period * (i+1)) group by uid );
SET i = i+1;
END WHILE;
Cohorts may contain the same user ids, though usually a user exists in only one cohort. And within each cohort, users are unique.
Now I have a temp table like this:
user_id | 1st timestamp | cohort_no
uid1 1423836540 0
uid2 1423839540 0
uid3 1423841160 1
uid4 1423841460 2
...
uidN 1423843080 M
Then I need to divide the time range into periods again and calculate, for each period, how many users from each cohort have sent the event "level_end:001:win".
For each small period I select all unique users who have sent the "level_end:001:win" event and left join them to the tmp_c cohorts table. So I have something like this:
user_id | 1st timestamp | cohort_no | user_id | other fields...
uid1 1423836540 0 uid1
uid2 1423839540 0 null
uid3 1423841160 1 null
uid4 1423841460 2 uid4
...
uidN 1423843080 M null
This way I see how many users from my cohorts are among those who have sent "level_end:001:win"; the ones not found are excluded by the where clause: where t2.uid is not null.
Finally I perform the grouping and get counts of users in each cohort who have sent "level_end:001:win" in this particular period.
Here's the code:
DROP TABLE IF EXISTS tmp_res;
create temporary table tmp_res (uid varchar(64) CHARACTER SET cp1251 NOT NULL, c int(11), cnt int(11) );
set i = 0;
set cnt_c = ceil((t_end - beg) / period) ;
WHILE i < cnt_c DO
insert into tmp_res
select concat(beg + period * i, "_", beg + period * (i+1)), c, count(distinct(uid)) from
(select t1.uid, t1.c from tmp_c t1 left join
(select uid, min(tm) from events_view where
locate("level_end:001:win", e) > 0 and
tm > (beg + period * i) AND tm <= (beg + period * (i+1)) group by uid ) t2
on t1.uid = t2.uid where t2.uid is not null) t3
group by c;
SET i = i+1;
END WHILE;
/*getting result of the first method: tooo slooooow!*/
select * from tmp_res;
The result I've got (it's ok that some cohorts do not appear in some periods):
"1423832400_1423890000","1","35"
"1423832400_1423890000","2","3"
"1423832400_1423890000","3","1"
"1423832400_1423890000","4","1"
"1423890000_1423947600","1","21"
"1423890000_1423947600","2","50"
"1423890000_1423947600","3","2"
"1423947600_1424005200","1","9"
"1423947600_1424005200","2","24"
"1423947600_1424005200","3","70"
"1423947600_1424005200","4","6"
"1424005200_1424062800","1","7"
"1424005200_1424062800","2","15"
"1424005200_1424062800","3","21"
"1424005200_1424062800","4","32"
"1424062800_1424120400","1","7"
"1424062800_1424120400","2","13"
"1424062800_1424120400","3","24"
"1424062800_1424120400","4","18"
"1424120400_1424178000","1","10"
"1424120400_1424178000","2","12"
"1424120400_1424178000","3","18"
"1424120400_1424178000","4","14"
"1424178000_1424235600","1","6"
"1424178000_1424235600","2","7"
"1424178000_1424235600","3","9"
"1424178000_1424235600","4","12"
"1424235600_1424293200","1","6"
"1424235600_1424293200","2","8"
"1424235600_1424293200","3","9"
"1424235600_1424293200","4","5"
"1424293200_1424350800","1","5"
"1424293200_1424350800","2","3"
"1424293200_1424350800","3","11"
"1424293200_1424350800","4","10"
"1424350800_1424408400","1","8"
"1424350800_1424408400","2","5"
"1424350800_1424408400","3","7"
"1424350800_1424408400","4","7"
"1424408400_1424466000","2","6"
"1424408400_1424466000","3","7"
"1424408400_1424466000","4","3"
"1424466000_1424523600","1","3"
"1424466000_1424523600","2","4"
"1424466000_1424523600","3","8"
"1424466000_1424523600","4","2"
"1424523600_1424581200","2","3"
"1424523600_1424581200","3","3"
It works but it takes too much time to process because there are many queries here instead of one, so I need to rewrite it.
I think it can be rewritten with joins, but I'm still not sure how.
I decided to make a temporary table and write period boundaries in it:
DROP TABLE IF EXISTS tmp_times;
create temporary table tmp_times (tm_start int(11), tm_end int(11));
set cnt_c = ceil((t_end - beg) / period) ;
set i = 0;
WHILE i < cnt_c DO
insert into tmp_times values( beg + period * i, beg + period * (i+1));
SET i = i+1;
END WHILE;
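(As an aside: if you happen to have a numbers table, the same boundaries can be produced without the WHILE loop. A sketch, assuming a helper table nums(n) holding consecutive integers from 0, and the beg/period/cnt_c variables from above:)

INSERT INTO tmp_times (tm_start, tm_end)
SELECT beg + period * n,       -- start of period n
       beg + period * (n + 1)  -- end of period n
FROM nums
WHERE n < cnt_c;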
Then I build the periods-to-events mapping (user_id + timestamp identify a particular event) in a temp table, left join it to the cohorts table, and group the result:
SELECT Concat(tm_start, "_", tm_end) per,
t1.c coh,
Count(DISTINCT( t2.uid ))
FROM tmp_c t1
LEFT JOIN (SELECT *
FROM tmp_times t3
LEFT JOIN (SELECT uid,
tm
FROM events_view
WHERE Locate("level_end:101:win", e) > 0)
t4
ON ( t4.tm > t3.tm_start
AND t4.tm <= t3.tm_end )
WHERE t4.uid IS NOT NULL
ORDER BY t3.tm_start) t2
ON t1.uid = t2.uid
WHERE t2.uid IS NOT NULL
GROUP BY per,
coh
ORDER BY per,
coh;
In my tests this returns the same result as method #1. I can't check the result manually, but I understand better how method #1 works, and as far as I can see it gives what I want. Method #2 is faster, but I'm not sure it's the best way, or that it will give the same result if cohorts intersect.
Maybe there are well-known, common methods to perform a cohort analysis in SQL? Is method #1 more reliable than method #2? I don't work with joins that often, which is why I still do not fully understand the magic of joins.
Method #2 looks like pure magic, and I tend not to believe in what I don't understand :)
Thanks for answers!
I have a big, rapidly growing data set of around 4 million rows. In order to define and exclude the outliers (for statistics/analytics usage), I need the algorithm to consider all entries in this data set. However, this is too much data to load into memory and my system chokes. I'm currently using this to collect and process the data:
@scoreInnerFences = innerFence Post.where( :source => 1 ).
order( :score ).
pluck( :score )
I don't think the typical divide-and-conquer method will work, because every entry has to be considered to keep my outlier calculation accurate. How can this be achieved efficiently?
innerFence identifies the lower quartile and upper quartile of the data set, then uses those findings to calculate the outliers. Here is the (yet to be refactored, non-DRY) code for this:
def q1(s)
q = s.length / 4
if s.length % 2 == 0
return ( s[ q ] + s[ q - 1 ] ) / 2
else
return s[ q ]
end
end
def q2(s)
q = s.length / 4
if s.length % 2 == 0
return ( s[ q * 3 ] + s[ (q * 3) - 1 ] ) / 2
else
return s[ q * 3 ]
end
end
def innerFence(s)
q1 = q1(s)
q2 = q2(s)
iq = (q2 - q1) * 3
if1 = q1 - iq
if2 = q2 + iq
return [if1, if2]
end
This is not the best way, but it is an easy way:
Do several queries. First you count the number of scores:
q = Post.where( :source => 1 ).count
then you do your calculations
then you fetch the scores
q1 = Post.where( :source => 1 ).
reverse_order(:score).
select("avg(score) as score").
offset(q).limit((q%2)+1)
q2 = Post.where( :source => 1 ).
reverse_order(:score).
select("avg(score) as score").
offset(q*3).limit((q%2)+1)
The code is probably wrong but I'm sure you get the idea.
For large datasets, I sometimes drop down below ActiveRecord. It's a memory hog, even, I imagine, when using pluck. Of course it's less portable, but sometimes it's worth it.
scores = Post.connection.execute('select score from posts where score > 1 order by score').map(&:first)
Don't know if that will help enough for 4 million records. If not, maybe look at a stored procedure?
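Along the same lines, here is a hedged sketch of pushing the whole quartile lookup into SQL, so Ruby only ever sees a couple of numbers. The offsets follow the simple s.length / 4 positions from the question, and posts/score/source are assumed to match your schema:

-- Lower quartile: average the two values around position N/4 (N = row count).
SELECT AVG(score) AS q1
FROM ( SELECT score
       FROM posts
       WHERE source = 1
       ORDER BY score
       LIMIT 2 OFFSET 249999  -- placeholder: use N/4 - 1, e.g. for N = 1,000,000
     ) t;
-- The upper quartile is the same query with OFFSET 3*N/4 - 1.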
I was looking around and found no solution to this. I'd be glad if someone could help me out here:
I have a table that has, among others, the following columns:
Vehicle_No, Stop1_depTime, Segment_TravelTime, Stop_arrTime, Stop_Sequence
The data might look something like this:
Vehicle_No Stop1_DepTime Segment_TravelTime Stop_Sequence Stop_arrTime
201 13000 60 1
201 13000 45 2
201 13000 120 3
201 13000 4
202 13300 240 1
202 13300 60 2
...
and I need to calculate the arrival time at each stop from the departure time at the first stop and the travel times in between for each vehicle. What I need in this case would look like this:
Vehicle_No Stop1_DepTime Segment_TravelTime Stop_Sequence Stop_arrTime
201 13000 60 1
201 13000 45 2 13060
201 13000 120 3 13105
201 13000 4 13225
202 13300 240 1
202 13300 60 2 13540
...
I have tried to find a solution for some time but was not successful. Thanks for any help you can give me!
Here is the query that still does not work - I am sure I did something wrong with getting the table from the database into this, but I don't know where. Sorry if this is a really simple error; I have just begun working with MSSQL.
Also, I have implemented the solution provided below and it works. At this point I mainly want to understand what went wrong here to learn about it. If it takes too much time, please do not bother with my question for too long. Otherwise - thanks a lot :)
;WITH recCTE
AS
(
SELECT ZAEHL_2011.dbo.L32.Zaehl_Fahrt_Id, ZAEHL_2011.dbo.L32.PlanAbfahrtStart, ZAEHL_2011.dbo.L32.Fahrzeit, ZAEHL_2011.dbo.L32.Sequenz, ZAEHL_2011.dbo.L32.PlanAbfahrtStart AS Stop_arrTime
FROM ZAEHL_2011.dbo.L32
WHERE ZAEHL_2011.dbo.L32.Sequenz = 1
UNION ALL
SELECT t. ZAEHL_2011.dbo.L32.Zaehl_Fahrt_Id, t. ZAEHL_2011.dbo.L32.PlanAbfahrtStart, t. ZAEHL_2011.dbo.L32.Fahrzeit,t. ZAEHL_2011.dbo.L32.Sequenz, r.Stop_arrTime + r. ZAEHL_2011.dbo.L32.Fahrzeit AS Stop_arrTime
FROM recCTE AS r
JOIN ZAEHL_2011.dbo.L32 AS t
ON t. ZAEHL_2011.dbo.L32.Zaehl_Fahrt_Id = r. ZAEHL_2011.dbo.L32.Zaehl_Fahrt_Id
AND t. ZAEHL_2011.dbo.L32.Sequenz = r. ZAEHL_2011.dbo.L32.Sequenz + 1
)
SELECT ZAEHL_2011.dbo.L32.Zaehl_Fahrt_Id, ZAEHL_2011.dbo.L32.PlanAbfahrtStart, ZAEHL_2011.dbo.L32.Fahrzeit, ZAEHL_2011.dbo.L32.Sequenz, ZAEHL_2011.dbo.L32.PlanAbfahrtStart,
CASE WHEN Stop_arrTime = ZAEHL_2011.dbo.L32.PlanAbfahrtStart THEN NULL ELSE Stop_arrTime END AS Stop_arrTime
FROM recCTE
ORDER BY ZAEHL_2011.dbo.L32.Zaehl_Fahrt_Id, ZAEHL_2011.dbo.L32.Sequenz
A recursive CTE solution - this assumes that each Vehicle_No makes only one trip in the table (i.e. each Stop_Sequence value appears only once per vehicle):
DECLARE @t TABLE
(Vehicle_No INT
,Stop1_DepTime INT
,Segment_TravelTime INT
,Stop_Sequence INT
,Stop_arrTime INT
)
INSERT @t (Vehicle_No,Stop1_DepTime,Segment_TravelTime,Stop_Sequence)
VALUES(201,13000,60,1),
(201,13000,45,2),
(201,13000,120,3),
(201,13000,NULL,4),
(202,13300,240,1),
(202,13300,60,2)
;WITH recCTE
AS
(
SELECT Vehicle_No, Stop1_DepTime, Segment_TravelTime,Stop_Sequence, Stop1_DepTime AS Stop_arrTime
FROM @t
WHERE Stop_Sequence = 1
UNION ALL
SELECT t.Vehicle_No, t.Stop1_DepTime, t.Segment_TravelTime,t.Stop_Sequence, r.Stop_arrTime + r.Segment_TravelTime AS Stop_arrTime
FROM recCTE AS r
JOIN @t AS t
ON t.Vehicle_No = r.Vehicle_No
AND t.Stop_Sequence = r.Stop_Sequence + 1
)
SELECT Vehicle_No, Stop1_DepTime, Segment_TravelTime,Stop_Sequence, Stop1_DepTime,
CASE WHEN Stop_arrTime = Stop1_DepTime THEN NULL ELSE Stop_arrTime END AS Stop_arrTime
FROM recCTE
ORDER BY Vehicle_No, Stop_Sequence
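One caveat on the recursion: SQL Server stops at 100 recursion levels by default, so if a vehicle can have more than about 100 stops, append a MAXRECURSION hint to the final SELECT, like this:

SELECT Vehicle_No, Stop1_DepTime, Segment_TravelTime, Stop_Sequence, Stop1_DepTime,
CASE WHEN Stop_arrTime = Stop1_DepTime THEN NULL ELSE Stop_arrTime END AS Stop_arrTime
FROM recCTE
ORDER BY Vehicle_No, Stop_Sequence
OPTION (MAXRECURSION 0) -- 0 lifts the cap; any value up to 32767 also works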
EDIT
Corrected version of OP's query - note that it's not necessary to fully qualify the column names:
;WITH recCTE
AS
(
SELECT Zaehl_Fahrt_Id, PlanAbfahrtStart, Fahrzeit, L32.Sequenz, PlanAbfahrtStart AS Stop_arrTime
FROM ZAEHL_2011.dbo.L32
WHERE Sequenz = 1
UNION ALL
SELECT t.Zaehl_Fahrt_Id, t.PlanAbfahrtStart, t.Fahrzeit,t.Sequenz, r.Stop_arrTime + r.Fahrzeit AS Stop_arrTime
FROM recCTE AS r
JOIN ZAEHL_2011.dbo.L32 AS t
ON t.Zaehl_Fahrt_Id = r.Zaehl_Fahrt_Id
AND t.Sequenz = r.Sequenz + 1
)
SELECT Zaehl_Fahrt_Id, PlanAbfahrtStart, Fahrzeit, Sequenz, PlanAbfahrtStart,
CASE WHEN Stop_arrTime = PlanAbfahrtStart THEN NULL ELSE Stop_arrTime END AS Stop_arrTime
FROM recCTE
ORDER BY Zaehl_Fahrt_Id, Sequenz
I'm quite sure this works:
SELECT a.Vehicle_No, a.Stop1_DepTime,
a.Segment_TravelTime, a.Stop_Sequence, a.Stop1_DepTime +
(SELECT SUM(b.Segment_TravelTime) FROM your_table b
WHERE b.Vehicle_No = a.Vehicle_No AND b.Stop_Sequence < a.Stop_Sequence)
FROM your_table a
ORDER BY a.Vehicle_No
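For what it's worth, on SQL Server 2012 or later the same running total can be written without the correlated subquery, using a windowed SUM (a sketch against the same hypothetical your_table):

SELECT a.Vehicle_No, a.Stop1_DepTime, a.Segment_TravelTime, a.Stop_Sequence,
a.Stop1_DepTime + SUM(a.Segment_TravelTime)
    OVER (PARTITION BY a.Vehicle_No
          ORDER BY a.Stop_Sequence
          ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS Stop_arrTime
-- the frame stops 1 row back, so each stop sums only earlier segments;
-- the first stop has an empty frame and therefore gets NULL, as in the sample
FROM your_table a
ORDER BY a.Vehicle_No, a.Stop_Sequence;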