JessRules - Iteration over an ArrayList

Here is my dilemma:
(defrule A15_test
    ?list <- (accumulate (bind ?ll (new java.util.ArrayList))          ;; initializer
                         (?ll add ?id)                                  ;; action
                         ?ll                                            ;; result
                         (P (ID_Jess ?id) (m ?ref&:(neq ?ref nil))))    ;; CE
    (foreach ?l ?list
        (printout t "l:" ?l " " crlf))
    =>
    (printout t "FIRE ! c:" (?list toString) " end. " crlf))
This gives the following error:
Jess reported an error in routine Jesp.parsePattern.
Message: Bad slot value at token '('.
Program text: ( defrule A15_test ?list <- ( accumulate ( bind ?ll ( new java.util.ArrayList ) ) ( ?ll add ?id ) ?ll ( P ( ID_Jess ?id ) ( m ?ref & : ( neq ?ref nil ) ) ) ) ( foreach ?l ?list ( at line 80 in file <eval pipe>.
at jess.Jesp.error(Unknown Source)
at jess.Jesp.a(Unknown Source)
at jess.Jesp.a(Unknown Source)
at jess.Jesp.a(Unknown Source)
at jess.Jesp.a(Unknown Source)
at jess.Jesp.if(Unknown Source)
at jess.Jesp.parseDefrule(Unknown Source)
at jess.Jesp.parseExpression(Unknown Source)
at jess.Jesp.promptAndParseOneExpression(Unknown Source)
at jess.Jesp.parse(Unknown Source)
at jess.Rete.eval(Unknown Source)
at jess.Rete.eval(Unknown Source)
at jesslanguage.JessExecutor.executeJess(JessExecutor.java:30)
at Main.testJessRules(Main.java:293)
at Main.main(Main.java:62)
Clearly, the foreach doesn't like my ?list.
Note that if I remove the (foreach ...) part, it all works fine and gives:
FIRE ! c:[p1, p3, p4]
FIRE ! c:[p1, p3, p4]
(which I'm trying to group into ONE fire by accumulating or counting the elements of the list)
Does anybody know how to iterate over a Java ArrayList directly from Jess?

Your iteration code is fine. The problem is that you're putting it in a place where only patterns (conditional elements) are allowed, not function calls. You need to move the foreach to the right-hand side of the rule, after the =>.
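Something along these lines should do it (an untested sketch of the same rule with the iteration moved past the =>; the (?list size) call just shows one way to get the element count you mention):

(defrule A15_test
    ?list <- (accumulate (bind ?ll (new java.util.ArrayList))          ;; initializer
                         (?ll add ?id)                                  ;; action
                         ?ll                                            ;; result
                         (P (ID_Jess ?id) (m ?ref&:(neq ?ref nil))))    ;; CE
    =>
    ;; function calls such as foreach belong here, on the right-hand side
    (foreach ?l ?list
        (printout t "l:" ?l " " crlf))
    (printout t "FIRE ! c:" (?list toString) " count:" (?list size) " end." crlf))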

Related

Spark: non-JSON to JSON and to DataFrame error

I have a JSON-like file (not a real JSON structure), but I converted it to JSON and read it through Spark's read.json (we are on Spark 1.6.0; I can't use the multiline feature from Spark 2 yet). It displays results, but at the same time it errors out. Any help greatly appreciated.
I have a document like this (just one example, but it is an array):
$result = [
{
'name' => 'R-2018:1583',
'issue_date' => '2018-05-17 02:51:06',
'type' => 'Product Enhancement Advisory',
'last_modified_date' => '2018-05-17 03:51:00',
'id' => 273,
'update_date' => '2018-05-17 02:51:06',
'synopsis' => ' enhancement update',
'advory' => 'R:1583'
}
]
I used it like this:
jsonRDD = sc.wholeTextFiles("/user/xxxx/aa.json").map(lambda x: x[1]).map(lambda x:x.replace('$result =','')).map(lambda x: x.replace("'",'"')).map(lambda x:x.replace("\n","")).map(lambda x:x.replace("=>",":")).map(lambda x:x.replace(" ",""))
sqlContext.read.json(jsonRDD).show()
It displays the results, but I also get the error below. Please help with this.
18/08/31 11:19:30 WARN util.ExecutionListenerManager: Error executing query execution listener
java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.spark.sql.query.analysis.QueryAnalysis$$anonfun$getInputMetadata$2.apply(QueryAnalysis.scala:121)
at org.apache.spark.sql.query.analysis.QueryAnalysis$$anonfun$getInputMetadata$2.apply(QueryAnalysis.scala:108)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.query.analysis.QueryAnalysis$.getInputMetadata(QueryAnalysis.scala:108)
at com.cloudera.spark.lineage.ClouderaNavigatorListener.writeQueryMetadata(ClouderaNavigatorListener.scala:74)
at com.cloudera.spark.lineage.ClouderaNavigatorListener.onSuccess(ClouderaNavigatorListener.scala:54)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$onSuccess$1$$anonfun$apply$mcV$sp$1.apply(QueryExecutionListener.scala:100)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$onSuccess$1$$anonfun$apply$mcV$sp$1.apply(QueryExecutionListener.scala:99)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$org$apache$spark$sql$util$ExecutionListenerManager$$withErrorHandling$1.apply(QueryExecutionListener.scala:121)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$org$apache$spark$sql$util$ExecutionListenerManager$$withErrorHandling$1.apply(QueryExecutionListener.scala:119)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
at org.apache.spark.sql.util.ExecutionListenerManager.org$apache$spark$sql$util$ExecutionListenerManager$$withErrorHandling(QueryExecutionListener.scala:119)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$onSuccess$1.apply$mcV$sp(QueryExecutionListener.scala:99)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$onSuccess$1.apply(QueryExecutionListener.scala:99)
at org.apache.spark.sql.util.ExecutionListenerManager$$anonfun$onSuccess$1.apply(QueryExecutionListener.scala:99)
at org.apache.spark.sql.util.ExecutionListenerManager.readLock(QueryExecutionListener.scala:132)
at org.apache.spark.sql.util.ExecutionListenerManager.onSuccess(QueryExecutionListener.scala:98)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2116)
at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1389)
at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1471)
at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:184)
at sun.reflect.GeneratedMethodAccessor55.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
The json function takes the path of a JSON file as a parameter, so you need to first save the cleaned JSON somewhere and then read that file back.
Something like this should work:
# clean up the pseudo-JSON and save it as text
(sc.wholeTextFiles("/user/xxxx/aa.json")
 .map(lambda x: x[1])
 .map(lambda x: x.replace('$result =', ''))
 .map(lambda x: x.replace("'", '"'))
 .map(lambda x: x.replace("\n", ""))
 .map(lambda x: x.replace("=>", ":"))
 .map(lambda x: x.replace(" ", ""))
 .saveAsTextFile("/user/xxxx/aa_transformed.json"))

# then read the saved output back as JSON
sqlContext.read.json("/user/xxxx/aa_transformed.json").show()

Why am I getting lifted/5.1 undefined error while using formlets?

I am having trouble with an error I keep getting while trying to use formlets in Racket. It says:
; lifted/5.1: undefined;
; cannot reference an identifier before its definition
; in module: top-level
; internal name: lifted/5.1
Nothing in my code is named "lifted" or "5.1" or any combination of the two. I am thinking that this is something inside the black box of formlets that I'm running up against, but I don't know what.
; set up
(require web-server/servlet web-server/servlet-env)
(require web-server/formlets)

; body of program
(define simpleformlet
  (formlet
   (#%#
    "Name: " ,{input-string . => . name}
    "Number: " ,{input-int . => . number})
   (name number)))

(define (start request)
  (showpage request))

(define (showpage request)
  (define (responsegen embed/url)
    (response/xexpr
     `(html
       (head (title "App3"))
       (body (h1 "Let's try this...")
             (form
              ([action ,(embed/url actionhandler)]
               [method "POST"])
              ,#(formlet-display simpleformlet)
              (input ([type "submit"])))))))
  (define (actionhandler request)
    (define-values (name number)
      (formlet-process simpleformlet request))
    (response/xexpr
     `(html
       (head (title "Alright!"))
       (body (h2 "name")
             (p ,name)
             (h2 "number")
             (p ,number)))))
  (send/suspend/dispatch responsegen))

; run it!
(serve/servlet start
               #:servlet-regexp #rx""
               #:servlet-path "/form")
I added the line
#lang racket
at the top of your program, used "Determine language from source" in the lower-left corner of DrRacket, and ran your program. A browser window opened and I was able to enter a name and a number before I got a sensible error.
I am using Racket version 6.12, so if you are using an older version of Racket, then upgrade and try again.
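Concretely, the top of the corrected file would start like this (everything below it unchanged):

#lang racket
; set up
(require web-server/servlet web-server/servlet-env)
(require web-server/formlets)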

java.util.NoSuchElementException: None.get for Vec

Perhaps I'm going about something the wrong way. I have a number of buffers that need to get locked and unlocked as part of the behavior of a state machine. I thought it would be perfect to use a Vec of Reg to store the state from clock to clock and use a var Vec of wires to accumulate the state as the state machine goes about locking and unlocking things. Here is code similar to the code I wrote that breaks in the same way:
import Chisel._

class testvec extends Module
{
  val io = new Bundle
  {
    val addr   = Vec( 5, UInt( INPUT, 4 ) )
    val enable = Bool( INPUT )
    val in     = Vec( 5, UInt( INPUT, 16 ) )
    val out    = Vec( 16, UInt( OUTPUT, 16 ) )
  }

  val latch = Vec( 16, Reg( init=UInt(0,16) ) )
  var temp  = Vec( 16, UInt(0,16) )

  for( i <- 0 until 16 )
  {
    temp(i) := latch(i)
  }
  for( i <- 0 until 5 )
  {
    temp(io.addr(i)) := io.in(i)
  }
  for( i <- 0 until 16 )
  {
    io.out(i) := temp(i)
  }
  when( io.enable )
  {
    for( i <- 0 until 16 )
    {
      latch(i) := temp(i)
    }
  }
}

class testvec_Tests(c: testvec) extends Tester(c)
{
  step( 1 )
}

object mainStub
{
  def main( args: Array[String] ): Unit =
  {
    chiselMainTest( Array[String]("--backend", "c", // "--backend", "v",
                                  "--compile", "--test", "--genHarness"),
                    () => Module( new testvec() ) )
    {
      c => new testvec_Tests( c )
    }
  }
}
Note that although this code merely has a simple loop, I need to get my combinatorial lock state at various points during the execution of the state machine each clock cycle, so that's why this simplification has those combinatorial states as the final output rather than the registers.
Here's the full text of the error message:
[info] Set current project to chisel
[info] Running mainStub
[error] (run-main-0) java.util.NoSuchElementException: None.get
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at Chisel.ROMData$$anonfun$3.apply(ROM.scala:90)
at Chisel.ROMData$$anonfun$3.apply(ROM.scala:90)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:750)
at scala.collection.immutable.RedBlackTree$TreeIterator.foreach(RedBlackTree.scala:468)
at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at Chisel.ROMData.<init>(ROM.scala:90)
at Chisel.ROM.data$lzycompute(ROM.scala:72)
at Chisel.ROM.data(ROM.scala:72)
at Chisel.ROM.read(ROM.scala:77)
at Chisel.Vec.apply(Vec.scala:121)
at testvec$$anonfun$2.apply$mcVI$sp(testvec.scala:21)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:166)
at testvec.<init>(testvec.scala:19)
at mainStub$$anonfun$main$1$$anonfun$apply$1.apply(testvec.scala:47)
at mainStub$$anonfun$main$1$$anonfun$apply$1.apply(testvec.scala:47)
at Chisel.Module$.Chisel$Module$$init(Module.scala:65)
at Chisel.Module$.apply(Module.scala:50)
at mainStub$$anonfun$main$1.apply(testvec.scala:47)
at mainStub$$anonfun$main$1.apply(testvec.scala:47)
at Chisel.Driver$.execute(Driver.scala:101)
at Chisel.Driver$.apply(Driver.scala:41)
at Chisel.Driver$.apply(Driver.scala:64)
at Chisel.chiselMain$.apply(hcl.scala:63)
at Chisel.chiselMainTest$.apply(hcl.scala:76)
at mainStub$.main(testvec.scala:48)
at mainStub.main(testvec.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 1 s, completed Mar 3, 2016 1:49:17 PM
Are you sure about your Vec declarations?
According to the documentation, I think Vec must be declared as follows:
val io = new Bundle
{
  val addr   = Vec.fill( 5 ) { UInt( INPUT, 4 ) }
  val enable = Bool( INPUT )
  val in     = Vec.fill( 5 ) { UInt( INPUT, 16 ) }
  val out    = Vec.fill( 16 ) { UInt( OUTPUT, 16 ) }
}
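The internal latch and temp vectors presumably want the same treatment; a rough, untested sketch in the same Chisel 2 style:

val latch = Vec.fill( 16 ) { Reg( init = UInt( 0, 16 ) ) }
val temp  = Vec.fill( 16 ) { UInt( width = 16 ) }   // plain wires: width only, no literal value

Keeping temp's elements as non-literal UInts matters here: a Vec built from constant values behaves like a read-only ROM when it is indexed with a UInt address, which would explain the Chisel.ROM.read frames in the stack trace above.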

MongoDB: How to deal with invalid ObjectIDs

Below is my code to find a document by ObjectID:
def find(selector: JsValue, projection: Option[JsValue], sort: Option[JsValue],
         page: Int, perPage: Int): Future[Seq[JsValue]] = {
  var query = collection.genericQueryBuilder.query(selector).options(
    QueryOpts(skipN = page * perPage)
  )
  projection.map(value => query = query.projection(value))
  sort.map(value => query = query.sort(value.as[JsObject]))
  // this is the line where the call crashes
  query.cursor[JsValue].collect[Vector](perPage).transform(
    success => success,
    failure => failure match {
      case e: LastError => DaoException(e.message, Some(DATABASE_ERROR))
    }
  )
}
Now let's suppose we invoke this method with an invalid ObjectID:
// ObjectId 53125e9c2004006d04b605abK is invalid (ends with a K)
find(Json.obj("_id" -> Json.obj("$oid" -> "53125e9c2004006d04b605abK")), None, None, 0, 25)
The call above causes the following exception when executing query.cursor[JsValue].collect[Vector](perPage) in the find method:
Caused by: java.util.NoSuchElementException: JsError.get
at play.api.libs.json.JsError.get(JsResult.scala:11) ~[play-json_2.10.jar:2.2.1]
at play.api.libs.json.JsError.get(JsResult.scala:10) ~[play-json_2.10.jar:2.2.1]
at play.modules.reactivemongo.json.collection.JSONGenericHandlers$StructureBufferWriter$.write(jsoncollection.scala:44) ~[play2-reactivemongo_2.10-0.10.2.jar:0.10.2]
at play.modules.reactivemongo.json.collection.JSONGenericHandlers$StructureBufferWriter$.write(jsoncollection.scala:42) ~[play2-reactivemongo_2.10-0.10.2.jar:0.10.2]
at reactivemongo.api.collections.GenericQueryBuilder$class.reactivemongo$api$collections$GenericQueryBuilder$$write(genericcollection.scala:323) ~[reactivemongo_2.10-0.10.0.jar:0.10.0]
at reactivemongo.api.collections.GenericQueryBuilder$class.cursor(genericcollection.scala:342) ~[reactivemongo_2.10-0.10.0.jar:0.10.0]
at play.modules.reactivemongo.json.collection.JSONQueryBuilder.cursor(jsoncollection.scala:110) ~[play2-reactivemongo_2.10-0.10.2.jar:0.10.2]
at reactivemongo.api.collections.GenericQueryBuilder$class.cursor(genericcollection.scala:331) ~[reactivemongo_2.10-0.10.0.jar:0.10.0]
at play.modules.reactivemongo.json.collection.JSONQueryBuilder.cursor(jsoncollection.scala:110) ~[play2-reactivemongo_2.10-0.10.2.jar:0.10.2]
at services.common.mongo.MongoDaoComponent$MongoDao$$anon$1.find(MongoDaoComponent.scala:249) ~[classes/:na]
... 25 common frames omitted
Any idea? Thanks.
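One way to guard against this, sketched here under the assumption that reactivemongo.bson.BSONObjectID.parse is available in the versions above, is to validate the id string before ever building the query:

import scala.concurrent.Future
import scala.util.{Success, Failure}
import reactivemongo.bson.BSONObjectID

// hypothetical wrapper around the find method shown above
def findById(id: String, page: Int, perPage: Int): Future[Seq[JsValue]] =
  BSONObjectID.parse(id) match {
    case Success(oid) =>
      find(Json.obj("_id" -> Json.obj("$oid" -> oid.stringify)), None, None, page, perPage)
    case Failure(_) =>
      // fail fast with an ordinary failed Future instead of the JsError.get crash
      Future.failed(new IllegalArgumentException(s"Invalid ObjectID: $id"))
  }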

Error running Hive query with JSON data?

I have data containing the following:
{"field1":{"data1": 1},"field2":100,"field3":"more data1","field4":123.001}
{"field1":{"data2": 1},"field2":200,"field3":"more data2","field4":123.002}
{"field1":{"data3": 1},"field2":300,"field3":"more data3","field4":123.003}
{"field1":{"data4": 1},"field2":400,"field3":"more data4","field4":123.004}
I uploaded it to S3 and converted it to a Hive table using the following from the Hive console:
ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;
CREATE EXTERNAL TABLE impressions (json STRING ) ROW FORMAT DELIMITED LINES TERMINATED BY '\n' LOCATION 's3://my-bucket/';
The query:
SELECT * FROM impressions;
gives output fine, but as soon as I try to use the get_json_object UDF and run the query:
SELECT get_json_object(impressions.json, '$.field2') FROM impressions;
I get the following error:
> SELECT get_json_object(impressions.json, '$.field2') FROM impressions;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: cannot find dir = s3://nick.bucket.dev/snapshot.csv in pathToPartitionInfo: [s3://nick.bucket.dev/]
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:291)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:258)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat$CombineHiveInputSplit.<init>(CombineHiveInputFormat.java:108)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:423)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1036)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1028)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:172)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:944)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:897)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:871)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:479)
at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:261)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:218)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:567)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Job Submission failed with exception 'java.io.IOException(cannot find dir = s3://my-bucket/snapshot.csv in pathToPartitionInfo: [s3://my-bucket/])'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
Does anyone know what is wrong?
Any reason why you are declaring:
ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;
But not using the serde in your table definition? See the code snippet below on how to use it. I can't see any reason to use get_json_object here.
CREATE EXTERNAL TABLE impressions (
  field1 string, field2 string, field3 string, field4 string
)
ROW FORMAT SERDE 'com.amazon.elasticmapreduce.JsonSerde'
WITH SERDEPROPERTIES ( 'paths'='field1, field2, field3, field4' )
LOCATION 's3://my-bucket/';
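Assuming that definition parses, the fields should then be queryable directly, without get_json_object, e.g.:

SELECT field2, field4 FROM impressions;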