jruby-rack-worker not starting on WebLogic

I have a JRuby project with background processes that need to run every x minutes. I am using the jruby-rack-worker + delayed_cron_job libraries. I have followed the jruby-rack-worker instructions as follows:
Copied the jruby-rack-worker.jar file under the lib folder.
My web.xml, located under the config folder:
My worker .rb file:
My Gemfile:
gem 'jruby-rack-worker', :platform => :jruby, :require => nil
gem 'delayed_cron_job'
After deploying the WAR file on WebLogic, I checked the log file, and I can see that after deployment completes it tries to start the worker, but nothing happens:
####<Jul 7, 2017, 6:48:00,489 PM SGT> <Info> <CONCURRENCY> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000020> <1499424480489> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162600> <Creating ManagedThreadFactory "DefaultManagedThreadFactory" (partition="DOMAIN", module="null", application="myrubyapp")>
####<Jul 7, 2017, 6:48:00,489 PM SGT> <Info> <CONCURRENCY> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000020> <1499424480489> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162610> <Creating ManagedExecutorService "DefaultManagedExecutorService" (partition="DOMAIN", module="null", application="myrubyapp", workmanager="default")>
####<Jul 7, 2017, 6:48:00,490 PM SGT> <Info> <CONCURRENCY> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000020> <1499424480490> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-2162611> <Creating ManagedScheduledExecutorService "DefaultManagedScheduledExecutorService" (partition="DOMAIN", module="null", application="myrubyapp", workmanager="default")>
####<Jul 7, 2017, 6:48:00,490 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000020> <1499424480490> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149059> <Module myrubyapp.war of application myrubyapp is transitioning from STATE_NEW to STATE_PREPARED on server AdminServer.>
####<Jul 7, 2017, 6:48:02,884 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000020> <1499424482884> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149060> <Module myrubyapp.war of application myrubyapp successfully transitioned from STATE_NEW to STATE_PREPARED on server AdminServer.>
####<Jul 7, 2017, 6:48:02,911 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424482911> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149059> <Module myrubyapp.war of application myrubyapp is transitioning from STATE_PREPARED to STATE_ADMIN on server AdminServer.>
####<Jul 7, 2017, 6:48:02,927 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424482927> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149060> <Module myrubyapp.war of application myrubyapp successfully transitioned from STATE_PREPARED to STATE_ADMIN on server AdminServer.>
####<Jul 7, 2017, 6:48:03,96 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424483096> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <INFO: jruby 9.1.5.0 (2.3.1) 2016-09-07 036ce39 Java HotSpot(TM) 64-Bit Server VM 25.131-b11 on 1.8.0_131-b11 +jit [mswin32-x86_64]>
####<Jul 7, 2017, 6:48:03,97 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424483097> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <INFO: using a shared (threadsafe!) runtime>
####<Jul 7, 2017, 6:48:24,321 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424504321> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <I, [2017-07-07T10:48:24.309000 #9420] INFO -- : [ActiveJob] [CronJobs] [859cb101-2c4b-4995-ad08-d453c7f26b35] Performing CronJobs from Inline(default)
>
####<Jul 7, 2017, 6:48:24,705 PM SGT> <Info> <Common> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424504705> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000628> <Created "1" resources for pool "myapp-staging-ds", out of which "1" are available and "0" are unavailable.>
####<Jul 7, 2017, 6:48:28,253 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424508253> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <I, [2017-07-07T10:48:28.253000 #9420] INFO -- : [ActiveJob] [CronJobs] [859cb101-2c4b-4995-ad08-d453c7f26b35] Performed CronJobs from Inline(default) in 3930.0ms
>
####<Jul 7, 2017, 6:48:28,255 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424508255> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <I, [2017-07-07T10:48:28.255000 #9420] INFO -- : [ActiveJob] Enqueued CronJobs (Job ID: 859cb101-2c4b-4995-ad08-d453c7f26b35) to Inline(default)
>
####<Jul 7, 2017, 6:48:30,292 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424510292> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <INFO: [org.kares.jruby.rack.DefaultWorkerManager] started 1 worker(s)>
####<Jul 7, 2017, 6:48:30,356 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424510356> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149059> <Module myrubyapp.war of application myrubyapp is transitioning from STATE_ADMIN to STATE_ACTIVE on server AdminServer.>
####<Jul 7, 2017, 6:48:30,356 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424510356> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149060> <Module myrubyapp.war of application myrubyapp successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server AdminServer.>
####<Jul 7, 2017, 6:48:30,377 PM SGT> <Info> <Deployer> <voldemort-PC> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000022> <1499424510377> <[severity-value: 64] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-149074> <Successfully completed deployment task: [Deployer:149026]deploy application myrubyapp on AdminServer..>
####<Jul 7, 2017, 6:48:30,404 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <jruby-rack-worker#1> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424510404> <[severity-value: 64] [rid: 0:3] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <I, [2017-07-07T10:48:30.401000 #9420] INFO -- : 2017-07-07T10:48:30+0000: [Worker(host:voldemort-PC pid:9420 thread:jruby-rack-worker#1)] Starting job worker
>
####<Jul 7, 2017, 6:48:34,991 PM SGT> <Info> <Health> <voldemort-PC> <AdminServer> <weblogic.GCMonitor> <<anonymous>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000006> <1499424514991> <[severity-value: 64] [rid: 0:1] [partition-id: 0] [partition-name: DOMAIN] > <BEA-310002> <21% of the total memory in the server is free.>
####<Jul 7, 2017, 6:50:34,994 PM SGT> <Info> <Health> <voldemort-PC> <AdminServer> <weblogic.GCMonitor> <<anonymous>> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000006> <1499424634994> <[severity-value: 64] [rid: 0:1] [partition-id: 0] [partition-name: DOMAIN] > <BEA-310002> <35% of the total memory in the server is free.>
What is missing here, and how can I tell whether the job worker started successfully, or why it didn't start?

You have it right, and your log output confirms the worker started fine:
####<Jul 7, 2017, 6:48:30,404 PM SGT> <Info> <ServletContext-/myrubyapp> <voldemort-PC> <AdminServer> <jruby-rack-worker#1> <weblogic> <> <ef7dc28a-6fde-4131-b1b0-41f2bd3db01b-00000021> <1499424510404> <[severity-value: 64] [rid: 0:3] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <I, [2017-07-07T10:48:30.401000 #9420] INFO -- : 2017-07-07T10:48:30+0000: [Worker(host:voldemort-PC pid:9420 thread:jruby-rack-worker#1)] Starting job worker
You probably did not put anything on DJ's (Delayed::Job's) queue ... thus you are not seeing any work getting done.
delayed_cron_job won't start anything auto-magically; most likely your worker .rb file isn't being loaded by anything. You should trigger the loading of such worker files yourself, but be careful on restarts, as you might need to remove the previously created schedules.
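For illustration, here is a hedged sketch of a Rails initializer that creates the recurring schedule via delayed_cron_job; the file name and the */5 cron expression are assumptions, CronJobs is the job class visible in your log, and set(cron: ...) is delayed_cron_job's ActiveJob-style way of attaching a cron expression:
# config/initializers/schedule_cron_jobs.rb -- hypothetical file name
# Enqueue the recurring job once so the Delayed::Job loop started by
# jruby-rack-worker has something to pick up. Guard against duplicates
# (e.g. after a redeploy), since the schedule row persists in the database.
unless Delayed::Job.where("handler LIKE ?", "%CronJobs%").exists?
  CronJobs.set(cron: "*/5 * * * *").perform_later # run every 5 minutes -- adjust as needed
end
Note also that your log shows CronJobs being performed and enqueued "to Inline(default)"; with the Inline ActiveJob adapter nothing ever reaches the Delayed::Job table, so you will likely also need config.active_job.queue_adapter = :delayed_job before the worker has anything to poll.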

Producing a different p-value each time I 'Knit HTML'

I have embedded some code in my Rmd file to perform a t.test on a set of data. Weirdly, each time I click 'Knit HTML' I see a different p-value in the HTML file, but the same doesn't happen if I run the code in the console. Can somebody please help me understand why this is happening and how I can avoid it?
Below is that piece of code:
```{r, echo=TRUE}
tresults <- matrix(nrow = 4, ncol = 3)
for (i in 1:4) {
  options(scipen = 999)  # force R not to use exponential notation
  tresult <- t.test(years[[i]][, 11], years[[i]][, 21], var.equal = TRUE, alternative = "greater")
  tresults[i, ] <- as.numeric(c(tresult[[1]], tresult[[2]], tresult[[3]]))
}
tresults <- format(round(tresults, 3), nsmall = 3)  # round off to 3 decimal places
tresults <- cbind(c("2004-05", "2005-06", "2006-07", "2007-08"), tresults)
colnames(tresults) <- c("years", "t", "df", "p-value")
print(tresults)
```
Edit:
'years' is basically a list that contains one data frame per year:
#List the datasets
years <- list(prod_data_0405, prod_data_0506, prod_data_0607, prod_data_0708)
I have created a sample of this list and converted it to a data frame for the convenience of sharing, using the following command:
years_sample <- as.data.frame(do.call(rbind, years_sample))
and saved it on this link. Please use it to test the code and let me know.
Edit 2
Here is the sample that I have created using dput(years)
list(structure(list(ID = c(1, 2, 3, 4, 5, 6), Area..acres. = c(4,
1, 2, 3, 1, 1), Cotton = c(2, 2.5, 2, 5, 3, 0), Pigeon.pea = c(0.33,
NaN, 0.5, 0.21, NaN, 0), Soyabean = c(4, NaN, NaN, 6, NaN, NaN
), Sorghum = c("", "", "", "6", "", ""), Other = c(NaN, NaN,
NaN, 1.6, NaN, NaN), Total.yield..Quintal. = c(6.33, 2.5, 5,
23.56, 3, 0), Gross.Income..Rs. = c(4695, 4750, 4612.5, 11531.67,
5700, 3800), Total.Expenditure..Rs. = c(1955, 1700, 2237, 3296.67,
1520, 2900), Net.Income..Rs. = c(2740, 3050, 2366.5, 235, 4180,
0), Area..acres..1 = c(1, 8, 2, 3, 5, 5), Cotton.1 = c("", "4.6",
"5", "", "2.33", "0"), Pigeon.pea.1 = c(NaN, 0.4, 0.5, 0.33,
0.2, 0), Soyabean.1 = c(NaN, 2, NaN, 0.23, 3, NaN), Sorghum.1 = c(4,
11, NaN, NaN, 4.5, 0), Other.1 = c(NaN, NaN, NaN, NaN, NaN, NaN
), Total.yield..Quintal..1 = c(4, 40, 11, 1.7, 15.1, 0), Gross.Income..Rs..1 = c(3200,
7603.12, 7612.5, 1013.33, 4705, 5980), Total.Expenditure..Rs..1 = c(1950,
3042.7, 2850, 1193.33, 2060, 4050), Net.Income..Rs..1 = c(1250,
4560.72, 4762.5, -150, 2645, 2852.5)), .Names = c("ID", "Area..acres.",
"Cotton", "Pigeon.pea", "Soyabean", "Sorghum", "Other", "Total.yield..Quintal.",
"Gross.Income..Rs.", "Total.Expenditure..Rs.", "Net.Income..Rs.",
"Area..acres..1", "Cotton.1", "Pigeon.pea.1", "Soyabean.1", "Sorghum.1",
"Other.1", "Total.yield..Quintal..1", "Gross.Income..Rs..1",
"Total.Expenditure..Rs..1", "Net.Income..Rs..1"), row.names = c(NA,
6L), class = "data.frame"), structure(list(ID = c(1, 2, 3, 4,
5, 6), Area..acres. = c(2, 1, 2, 6, 1, 1), Cotton = c(NaN, 6,
2, 1.75, NaN, NaN), Pigeon.pea = c(0.75, 2, NaN, 1, NaN, NaN),
Soyabean = c(4, NaN, 1.5, 2.75, 3, 3), Sorghum = c(1, NaN,
2, 1, NaN, NaN), Other = c(0.25, 0.1, NaN, NaN, NaN, NaN),
Total.yield..Quintal. = c(12, 8.1, 11, 18.5, 3, 3), Gross.Income..Rs. = c(6525,
3000, 5575, 4666.67, 3240, 3800), Total.Expenditure..Rs. = c(3785,
2290, 2270, 2450, 1950, 2900), Net.Income..Rs. = c(2740,
710, 3305, 2216.67, 1290, 900), Area..acres..1 = c(3, 2,
2, 5, 4, 5), Cotton.1 = c(2, 2.5, 2, NaN, 0.83, 3), Pigeon.pea.1 = c(0.75,
0.25, 1, NaN, 1, 0.62), Soyabean.1 = c("", "", "", "", "",
""), Sorghum.1 = c(4, 0.1, 4, 1.5, 2, 3), Other.1 = c(NaN,
0.1, NaN, NaN, NaN, NaN), Total.yield..Quintal..1 = c(9.5,
5.7, 7, 3, 7.5, 17.5), Gross.Income..Rs..1 = c(4433.33, 3500,
4275, 1275, 2900, 5980), Total.Expenditure..Rs..1 = c(3823.33,
3270, 2660, 2075, 2262.5, 3560), Net.Income..Rs..1 = c(610,
230, 1615, 800, 637.5, 2420)), .Names = c("ID", "Area..acres.",
"Cotton", "Pigeon.pea", "Soyabean", "Sorghum", "Other", "Total.yield..Quintal.",
"Gross.Income..Rs.", "Total.Expenditure..Rs.", "Net.Income..Rs.",
"Area..acres..1", "Cotton.1", "Pigeon.pea.1", "Soyabean.1", "Sorghum.1",
"Other.1", "Total.yield..Quintal..1", "Gross.Income..Rs..1",
"Total.Expenditure..Rs..1", "Net.Income..Rs..1"), row.names = c(NA,
6L), class = "data.frame"), structure(list(ID = c(1, 2, 3, 4,
5, 6), Area..acres. = c(2, 1.5, 2, 3, 2, 3), Cotton = c(NaN,
2, NaN, 2.66, NaN, NaN), Pigeon.pea = c(NaN, 0.33, 0.4, 0.53,
0.5, NaN), Soyabean = c(3.5, NaN, 2, NaN, 3, 2.66), Sorghum = c(NaN,
NaN, NaN, 0.25, 2, NaN), Other = c(NaN, NaN, NaN, 0.19, NaN,
NaN), Total.yield..Quintal. = c(NaN, 3.5, 4.8, 10.02, 11, 8),
Gross.Income..Rs. = c(4200, 4646.66, 5389, 6507.33, 6600,
3200), Total.Expenditure..Rs. = c("1670", "2060", "2385",
"2528.33", "3006.5", "1426.66"), Net.Income..Rs. = c(2530,
2586.66, 3004, 3979, 3592.5, 1773.34), Area..acres..1 = c(2,
1.5, 2, 7, 1.5, 7.5), Cotton.1 = c(2, 3.33, 3, 2.16, 2, 0.93
), Pigeon.pea.1 = c(1, 0.33, 0.4, 0.33, 0.26, 0.2), Soyabean.1 = c(NaN,
NaN, NaN, NaN, NaN, NaN), Sorghum.1 = c(NaN, NaN, NaN, 5,
NaN, 6), Other.1 = c(NaN, NaN, NaN, NaN, NaN, NaN), Total.yield..Quintal..1 = c(6,
5.5, 6.8, 20, 3.4, 14.5), Gross.Income..Rs..1 = c(5930, 7500,
6920, 5100, 4666, 2822.66), Total.Expenditure..Rs..1 = c(3225,
3400, 3610, 3654.28, 5600, 1754.66), Net.Income..Rs..1 = c(2705,
4100, 3310, 1445.72, -934, 1068)), .Names = c("ID", "Area..acres.",
"Cotton", "Pigeon.pea", "Soyabean", "Sorghum", "Other", "Total.yield..Quintal.",
"Gross.Income..Rs.", "Total.Expenditure..Rs.", "Net.Income..Rs.",
"Area..acres..1", "Cotton.1", "Pigeon.pea.1", "Soyabean.1", "Sorghum.1",
"Other.1", "Total.yield..Quintal..1", "Gross.Income..Rs..1",
"Total.Expenditure..Rs..1", "Net.Income..Rs..1"), row.names = c(NA,
6L), class = "data.frame"), structure(list(ID = c(1, 2, 3, 4,
5, 6), Area..acres. = c(2, 2, 2, 3, 2, 1), Cotton = c(NaN, 3,
2, NaN, NaN, 4), Pigeon.pea = c(NaN, NaN, 0.5, 0.5, 1.5, 1),
Soyabean = c(3.66, NaN, NaN, 3, 1, NaN), Sorghum = c(2, NaN,
NaN, 1, 2.5, NaN), Other = c(22, NaN, NaN, 0.17, NaN, NaN
), Total.yield..Quintal. = c(12.3, 6, 5, 14.3, 10, 5), Gross.Income..Rs. = c(6030,
6420, 5562, 8183.33, 7780, 11800), Total.Expenditure..Rs. = c(3192,
4080, 3350, 20530, 3240, 5130), Net.Income..Rs. = c(2838,
2340, 2212, 5653.33, 4540, 6670), Area..acres..1 = c(2, 1,
2, 8, 1, 5), Cotton.1 = c(2, 4, 2.5, 3, 5.8, 5), Pigeon.pea.1 = c(3,
NaN, 0.5, 0.7, NaN, 0.25), Soyabean.1 = c(1, NaN, NaN, NaN,
NaN, NaN), Sorghum.1 = c(NaN, NaN, NaN, 3.7, NaN, 2), Other.1 = c(NaN,
NaN, NaN, NaN, NaN, NaN), Total.yield..Quintal..1 = c(0,
4, 6, 28, 5.8, 23), Gross.Income..Rs..1 = c(8675, 9760, 6677.5,
7417.7, 13050, 10860), Total.Expenditure..Rs..1 = c(7700,
6750, 5425, 4112.5, 6300, 4870), Net.Income..Rs..1 = c(975,
3010, 1252.5, 3305.7, 4750, 5990)), .Names = c("ID", "Area..acres.",
"Cotton", "Pigeon.pea", "Soyabean", "Sorghum", "Other", "Total.yield..Quintal.",
"Gross.Income..Rs.", "Total.Expenditure..Rs.", "Net.Income..Rs.",
"Area..acres..1", "Cotton.1", "Pigeon.pea.1", "Soyabean.1", "Sorghum.1",
"Other.1", "Total.yield..Quintal..1", "Gross.Income..Rs..1",
"Total.Expenditure..Rs..1", "Net.Income..Rs..1"), row.names = c(NA,
6L), class = "data.frame"))

With Couchbase, how to sum a total by day

I am looking to total (sum) the equipment used by day. The data I have is {job, toDate, fromDate, equipmentUsed}. Would map/reduce be the best approach, and how would I handle the "to" and "from" dates?
Here is some background. We have many projects, and many work orders for each project. Work orders are by day and have inventory for that day. I want to sum the inventory for each day in a date range to see if we will run out of inventory.
I will post sample data shortly.
{"project::100": {"name": "project one"}
,"project::101": {"name": "project two"}
,"workOrder::1000": {"project": "project::100", "dateNeeded": jan 1, "inventory": ["equip1": 2, "equip2": 1, "equip3": 3, "equip4": 4]}
,"workOrder::1001": {"project": "project::100", "dateNeeded": jan 2, "inventory": ["equip1": 1, "equip2": 2, "equip3": 1, "equip4": 4]}
,"workOrder::1002": {"project": "project::100", "dateNeeded": jan 4, "inventory": ["equip1": 1, "equip2": 2, "equip3": 3, "equip4": 1]}
,"workOrder::1000": {"project": "project::101", "dateNeeded": jan 1, "inventory": ["equip1": 1, "equip2": 3, "equip4": 1]}
,"workOrder::1001": {"project": "project::101", "dateNeeded": jan 3, "inventory": ["equip2": 1, "equip3": 3, "equip4": 1]}
,"workOrder::1002": {"project": "project::101", "dateNeeded": jan 4, "inventory": ["equip1": 1, "equip2": 1, "equip3": 2, "equip4": 3]}
}
Can you give an example of exactly what you want? It looks like you want to aggregate equipUsed for overlapping dates, as well as handle gaps in the date ranges, etc. For example:
{J1, Jan 7, Jan 1, 4},
{J2, Jan 4, Jan 2, 7},
{J3, Jan 10, Jan 5, 10},
{J4, Jan 25, Jan 15, 20} etc.,
The output is:
{Jan 1, 4}, {Jan 2, 11 (4 + 7)}, {Jan 3, 11}, {Jan 4, 11}, {Jan 5, 14 (4 + 10)}, {Jan 6, 14}, {Jan 7, 14}, {Jan 8, 10}, {Jan 9, 10}, {Jan 10, 10}, {Jan 11 to 14, 0}, and {Jan 15 to 25, 20}, etc.
This is some non-trivial logic. You can use the N1QL API from a programming language (Java, Python, Node, etc.) to solve it. For example, an exhaustive algorithm in pseudo code is (assuming the docs are in the 'default' bucket):
minDate = Run_N1QLQuery("SELECT MIN(fromDate) FROM default");
maxDate = Run_N1QLQuery("SELECT MAX(toDate) FROM default");
for d = minDate to maxDate
    sum_used = Run_N1QLQuery("SELECT SUM(equipUsed) FROM default WHERE %s BETWEEN fromDate AND toDate", d);
    d = increment_date(d);
Depending on what exactly is needed, one can write a much more efficient algorithm.
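To make the pseudo code concrete, here is a hedged Python sketch of the same day-by-day loop; run_n1ql_query is a placeholder you would wire to your Couchbase SDK's query call, and the bucket and field names are carried over from the pseudo code above (adjust them to your documents):
from datetime import date, timedelta

def run_n1ql_query(statement, *params):
    """Placeholder: run the N1QL statement via your Couchbase SDK and return a list of row dicts."""
    raise NotImplementedError

# Assumes fromDate/toDate are stored as ISO "YYYY-MM-DD" strings.
min_date = date.fromisoformat(run_n1ql_query("SELECT MIN(fromDate) AS d FROM default")[0]["d"])
max_date = date.fromisoformat(run_n1ql_query("SELECT MAX(toDate) AS d FROM default")[0]["d"])

usage_by_day = {}
d = min_date
while d <= max_date:
    rows = run_n1ql_query(
        "SELECT SUM(equipUsed) AS total FROM default WHERE $1 BETWEEN fromDate AND toDate",
        d.isoformat())
    usage_by_day[d.isoformat()] = rows[0]["total"] or 0  # None when no work orders cover the day
    d += timedelta(days=1)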
hth,
-Prasad

Subreport Table data scope issue SSRS

I'm trying to select data from other datasets to include in my table. This is the expression I have so far:
=iif(ReportItems!ID.Value=1
, (First(Fields!NumbBreaker.Value, "sp_Permit11"))
,iif(ReportItems!ID.Value=3
, (First(Fields!MoldProd.Value, "sp_PermitASMoldProd"))
,iif(ReportItems!ID.Value =4
, (First(Fields!MoldProd.Value, "sp_PermitASMoldProd"))
,iif(ReportItems!ID.Value =5
, (First(Fields!Thermal.Value, "sp_PermitThermalSand"))
,iif(ReportItems!ID.Value=6
, ((First(Fields!Steel20T.Value, "sp_Permit11")) + (First(Fields!Steel9T.Value, "sp_Permit11")) + (First(Fields!Ductile.Value, "sp_Permit11")))
,iif(ReportItems!ID.Value=7
, ((First(Fields!Steel20T.Value, "sp_Permit11")) + (First(Fields!Steel9T.Value, "sp_Permit11")))
,iif(ReportItems!ID.Value=8
, (First(Fields!IMF.Value, "sp_Permit11"))
,iif(ReportItems!ID.Value=9
, (First(Fields!Ductile.Value, "sp_Permit11"))
,iif(ReportItems!ID.Value = 10
, (First(Fields!DM1.Value, "sp_PermitDM1"))
,iif(ReportItems!ID.Value = 12
, (First(Fields!Zircon.Value, "sp_PermitZircon"))
,iif(ReportItems!ID.Value = 14
, (First(Fields!CMN.Value, "sp_PermitCMN"))
,iif(ReportItems!ID.Value = 15
, (First(Fields!A270.Value, "sp_Permit270"))
,iif(ReportItems!ID.Value= 16
, (First(Fields!A290.Value, "sp_Permit290"))
,iif(ReportItems!ID.Value = 17
, (First(Fields!CM8.Value, "sp_PermitCM8"))
,iif(ReportItems!ID.Value = 20
, (First(Fields!NT.Value, "sp_PermitNT")),"")))))))))))))))
How can I do this without using First? First only brings in the first value, but without it I get:
Report item expressions can only refer to fields within the current
dataset scope or, if inside an aggregate, the specified dataset scope
You can't select data from another dataset without specifying which data you want. This is why you need the First function - it is the easiest way to specify what to select from the other dataset.
However, there are other ways to select the data you want. I'll assume that each dataset has a unique ID field so we can use the Lookup function. Also, let's change from IIF to Switch, as it is a bit more convenient for this sort of statement.
Now your expression will look like this:
=Switch(
ReportItems!ID.Value = 1, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!NumbBreaker.Value, "sp_Permit11"),
ReportItems!ID.Value = 3, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!MoldProd.Value, "sp_PermitASMoldProd"),
ReportItems!ID.Value = 4, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!MoldProd.Value, "sp_PermitASMoldProd"),
ReportItems!ID.Value = 5, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Thermal.Value, "sp_PermitThermalSand"),
ReportItems!ID.Value = 6, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Steel20T.Value, "sp_Permit11") + Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Steel9T.Value, "sp_Permit11") + Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Ductile.Value, "sp_Permit11"),
ReportItems!ID.Value = 7, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Steel20T.Value, "sp_Permit11") + Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Steel9T.Value, "sp_Permit11"),
ReportItems!ID.Value = 8, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!IMF.Value, "sp_Permit11"),
ReportItems!ID.Value = 9, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Ductile.Value, "sp_Permit11"),
ReportItems!ID.Value = 10, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!DM1.Value, "sp_PermitDM1"),
ReportItems!ID.Value = 12, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!Zircon.Value, "sp_PermitZircon"),
ReportItems!ID.Value = 14, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!CMN.Value, "sp_PermitCMN"),
ReportItems!ID.Value = 15, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!A270.Value, "sp_Permit270"),
ReportItems!ID.Value = 16, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!A290.Value, "sp_Permit290"),
ReportItems!ID.Value = 17, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!CM8.Value, "sp_PermitCM8"),
ReportItems!ID.Value = 20, Lookup(Fields!ID.Value, Fields!ID.Value, Fields!NT.Value, "sp_PermitNT"),
True, "")
So we look up the value we want in the other dataset based on a unique key in the current dataset and return whichever field we want from the other dataset. The True, "" bit at the end is effectively the else condition - it is what is returned if no other condition is met.

Not sure how to loop in SQL over this data?

I'm trying to formulate a query which will give me this result:
For each quiz number list the quiz number and the average high score. Only include the quizzes that more than 10 students took.
The query I am formulating looks like this:
SELECT QuizNum, AVG(HighScore) from quizzes WHERE NumStudents > 10;
This, however, gives me:
1, 20.75000
Which is incorrect data. I just really don't know where to start with this one.
The table looks like this:
'QuizNum', 'decimal(2,0)', 'NO', 'PRI', '0', ''
'QuizDate', 'date', 'NO', 'PRI', '0000-00-00', ''
'HighScore', 'decimal(3,1)', 'YES', '', '', ''
'LowScore', 'decimal(3,1)', 'YES', '', '', ''
'AvgScore', 'decimal(3,1)', 'YES', '', '', ''
'NumStudents', 'int(11)', 'YES', '', '', ''
'NumPassing', 'int(11)', 'YES', '', '', ''
So, for example, every date on which quiz one occurred has more than 10 students, so I need its average. Then I need the next quiz.
Table Contents:
1, '2009-01-25', 20.0, 9.0, 12.5, 15, 10
1, '2009-06-15', 30.0, 22.0, 25.6, ,
1, '2009-08-25', , , , 15, 10
1, '2010-01-24', 20.0, 9.0, 12.5, 17, 14
1, '2010-06-14', 28.5, 21.0, 26.6, 25, 25
1, '2010-08-24', , , , 21, 18
2, '2009-03-06', 18.0, 10.5, 15.0, 15, 12
2, '2009-07-01', 28.5, 18.5, 23.4, ,
2, '2009-09-21', 18.0, 10.5, 15.0, 15, 12
2, '2010-03-05', 18.5, 11.5, 15.2, 17, 14
2, '2010-06-30', 30.0, 25.0, 27.4, 23, 23
2, '2010-09-20', , , , 22, 19
3, '2009-03-24', 19.0, 14.5, 17.8, 13, 13
3, '2009-08-01', 27.5, 25.0, 16.2, ,
3, '2009-10-12', 19.0, 14.5, 17.8, 13, 13
3, '2010-03-23', 20.0, 17.0, 18.6, 16, 16
3, '2010-07-31', , , , 23, 20
3, '2010-10-11', 20.0, 9.0, 13.8, 22, 17
4, '2009-04-14', 20.0, 15.5, , ,
4, '2009-11-22', 20.0, 15.5, 17.9, ,
4, '2010-04-13', 20.0, 12.5, , ,
4, '2010-11-21', 20.0, 7.5, 13.9, 20, 15
5, '2009-05-04', 17.0, 8.5, 10.7, 10, 7
5, '2009-12-09', 17.0, 8.5, 10.7, 10, 7
5, '2010-04-03', 19.5, 11.5, 15.7, 15, 13
5, '2010-12-08', 20.0, 15.0, 17.3, 18, 18
Ideas?
For each quiz number
You need a GROUP BY:
SELECT QuizNum, AVG(HighScore)
FROM quizzes
WHERE NumStudents > 10
GROUP BY QuizNum;
You have to use the GROUP BY clause, like:
SELECT QuizNum,
AVG(HighScore)
FROM quizzes
WHERE NumStudents > 10
GROUP BY QuizNum;

SQLQuery with Paste in R & RODBC

I have a problem with RODBC; this is the error:
chargerExp("C:\\test.csv",NE=1,NC=1,s=1)
Exeperience: 1 Execution: 1 Sujet: 1
> ajouter(new ("BDD"),new("Exp"))
[1] "42000 1064 [MySQL][ODBC 5.2(a) Driver][mysqld-5.6.17]You have an error in your SQL syntax;
check the manual that corresponds to your MySQL server version for the right syntax to use near ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/' at line 1"
[2] "[RODBC] ERROR: Could not SQLExecDirect 'INSERT INTO `test` (`NE`, `NC`, `E`, `X`, `Y`,
`Z`, `T`, `A`, `S`) VALUES ( , , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , , );'"
I'm using MySQL, RODBC and RStudio, with a method ajouter to insert into the database:
setMethod( f ="ajouter",signature =c(x="BDD",obj="Exp"),
def = function(x, obj)
{
channel <- odbcConnect(dsn="RSQL",uid="root",pwd="toor")
ne <- obj["ne"]
nc <- obj["nc"]
s <- obj["S"]
e<- encoder((obj["E"]))
x <- encoder((obj["X"]))
y <- encoder((obj["Y"]))
z <- encoder((obj["Z"]))
t<- encoder((obj["T"]))
a <- (obj["A"])
requeteSql.valeur <- paste("'",e,"'",",",
"'",x,"'",",",
"'",y,"'",",",
"'",z,"'",",",
"'",t,"'",",")
requetesql <- paste("INSERT INTO `test` (`NE`, `NC`, `E`, `X`, `Y`, `Z`, `T`,
`A`, `S`) VALUES (",ne,",",nc,", ",requeteSql.valeur, a,", ",s,");")
sqlQuery(channel, requetesql)
}
)
This is the encoder method. Its main purpose is to convert my object to text with some concatenation:
setMethod( f ="encoder", signature ="Para",
def =function(x, i, j, value)
{
s <- as.character(x["val"][1])
for(i in 2:length(x["val"]))
{
s <- paste(s,x["val"][i],sep="/")
}
return(s)
}
)
The call to my methods:
ajouter(new("BDD"), new("Exp"))
This is my table in the database:
CREATE TABLE `test` (
`idTest` int(11) NOT NULL AUTO_INCREMENT,
`NE` int(11) DEFAULT NULL,
`NU` int(11) DEFAULT NULL,
`E` text,
`X` text,
`Y` text,
`Z` text,
`T` text,
`A` float DEFAULT NULL,
`S` int(11) DEFAULT NULL,
PRIMARY KEY (`idTest`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
My problem is with sqlQuery; it says that my syntax is not correct, and I don't know why.
This is the object I want to load into the database (from dput, so copy/pasteable):
new("Para"
, ne= 1
, nc = 1
, E = new("E"
, val = c(-13, -11, -11, -11, -11, -9, -10, -12, -12, -12, -11, -10,
-8, -8, -8, -9, -9, -7, -9, -10, -9, -9, -9, -9, -11, -9, -7,
-7, -7, -7, -7, -8, -7, -7, -5, -5, -7, -8, -7, -5, -5, -6, -6,
-9, -35, -76, -96, -62, 38, 167, 251, 251, 248, 157, 94, 56,
)
)
, X = new("Para"
, val = c(115, 116, 114, 113, 113, 114, 115, 114, 114, 113, 112, 111,
113, 114, 114, 115, 115, 116, 115, 115, 114, 116, 114, 115, 114,
113, 114, 114, 114, 114, 114, 113, 113, 114, 114, 114, 114, 115,
114, 115, 115, 115, 114, 115, 116, 114, 114, 114, 116, 115, 113,
)
)
, Y = new("Para"
, val = c(10, 11, 9, 9, 10, 9, 10, 9, 11, 10, 11, 10, 11, 11, 10, 11,
10, 10, 11, 10, 9, 10, 11, 12, 11, 10, 11, 11, 11, 12, 11, 11,
11, 10, 11, 11, 10, 12, 10, 11, 11, 11, 11, 11, 11, 10, 11, 11,
10, 11, 11, 11, 10, 11, 11, 11, 10, 11, 10, 12, 11, 10, 10, 10,
)
)
, Z = new("Para"
, val = c(-42, -42, -44, -43, -42, -41, -42, -42, -42, -42, -42, -43,
-40, -41, -41, -40, -41, -43, -41, -41, -41, -41, -41, -40, -40,
-41, -40, -40, -41, -40, -42, -41, -41, -41, -41, -41, -43, -42,
-43, -42, -41, -42, -42, -40, -41, -42, -40, -41, -41, -41, -42,
)
, T = new("Para"
, val = c(25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2,
25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2,
25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2,
25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2, 25.2,
)
)
, A = 1L
, S = 1
)
I downloaded your code and there seem to be many problems, the least of which is actually running the query you tried. The error tells you it tried to run the SQL command:
INSERT INTO `test` (`NE`, `NC`, `E`, `X`, `Y`, `Z`, `T`, `A`, `S`)
VALUES ( , , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' , ' NA/NA/NA/ ' ,
' NA/NA/NA/ ' , 'NA/NA/NA/ ' , , );'
And it should be clear that this looks invalid. I don't think you can have empty values in an INSERT statement like that.
The blank values come from the lines:
ne <- obj["ne"]
nc <- obj["nc"]
s <- obj["S"]
In your function definition, obj is your Exp (or Experimentation) and you simply haven't set any values for these slots yet. Since they are empty vectors, they do not render as anything in the paste statement.
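For example, a hedged guard at the top of ajouter would fail fast instead of building a malformed INSERT (the slot names are taken from the method above):
# inside ajouter(), before requetesql is built
if (length(ne) == 0 || length(nc) == 0 || length(s) == 0) {
  stop("ajouter: the ne, nc and S slots must be set on the Exp object before inserting")
}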
The batman values (aka NA/NA/NA) are from the
e<- encoder((obj["E"]))
x <- encoder((obj["X"]))
y <- encoder((obj["Y"]))
lines. When the Signal class is empty (as it is when you create a new, empty object) the val slot is just an empty numeric vector. So when you go to encode it, you've set up a loop that's basically going
s <- NA
for(i in 2:1)
{
s <- paste(s,NA,sep="/")
}
which is resulting in that NA/NA/NA/ value. And really, that loop is unnecessary; a simple
s <- paste(as.character(x["val"]), collapse="/")
would give you the string you want without the counting-backwards problem.
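For illustration, here is a minimal sketch of encoder rewritten along those lines, keeping the Para signature from the question (the empty-slot guard is an addition, not something RODBC requires):
setMethod(f = "encoder", signature = "Para",
  def = function(x, i, j, value) {
    vals <- x["val"]
    if (length(vals) == 0) return(NA_character_) # guard: nothing to encode in an empty slot
    paste(as.character(vals), collapse = "/")    # e.g. "-13/-11/-11/..."
  }
)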
So really, running ajouter(new("BDD"), new("Exp")) with empty objects like that doesn't make much sense. You really should test each of your classes by themselves before putting them together like that; I can't believe you didn't get other errors or warnings before this. You should work on isolating your errors and testing each part individually.