Incorrect syntax of Chisel

I am learning Chisel 3.
I want to run my code and see the result, but when I run "sbt run" I get errors.
They suggest that using ".W" to define a width is illegal in Chisel, yet the user guide clearly says this syntax is correct.
My build.sbt looks like this:
scalaVersion := "2.11.8"

resolvers ++= Seq(
  Resolver.sonatypeRepo("snapshots"),
  Resolver.sonatypeRepo("releases")
)

libraryDependencies += "edu.berkeley.cs" %% "chisel3" % "3.0-SNAPSHOT"
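For example, a minimal module that uses the .W width syntax from the user guide looks like this (a sketch along the lines of the guide's examples, not my exact code):

import chisel3._

// A trivial pass-through module whose port widths are declared with .W
class PassThrough extends Module {
  val io = IO(new Bundle {
    val in  = Input(UInt(4.W))   // 4.W gives a 4-bit unsigned width
    val out = Output(UInt(4.W))
  })
  io.out := io.in
}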
What is the problem?
Thanks in advance.

It seems that the current SNAPSHOT does not include the most up-to-date version of Chisel. I apologize for the published snapshot being behind the documentation. We are working on getting an updated snapshot out now.
EDIT: I believe the published version is now up to date, try sbt clean (to remove the old snapshot) and then sbt run.

Related

Getting incompatible jackson version while reading a file in spark scala

I am trying to read a simple JSON file with Spark and Scala using the code below:
val data = spark.read.option("multiLine", true).json(jsonpath)
However, I get an error while reading it:
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Incompatible Jackson version: 2.11.2
Here is the sbt file
"org.apache.spark" %% "spark-core" % "2.4.0",
"org.apache.spark" %% "spark-sql" % "2.4.0",
"org.apache.spark" %% "spark-streaming" % "2.4.0",
"com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.7.1",
"com.fasterxml.jackson.core" % "jackson-databind" % "2.6.7",
"com.fasterxml.jackson.core" % "jackson-core" % "2.6.7"
I have tried different versions, but I cannot find a Jackson version that is compatible with Spark 2.4.
Can someone please help me with this?
Spark comes with its own version of Jackson, and Jackson is not compatible between minor releases, so you cannot mix different Jackson versions in your project.
The easiest way to avoid trouble is to use the same version as Spark and mark the dependency as Provided in your build definition.
For Spark 2.4.x, that means Jackson 2.6.x. See for instance https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.12/2.4.8
EDIT: you should also make sure that you don't have another dependency that pulls another Jackson version. You can use SBT dependencyOverrides to achieve this.
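A sketch of what that could look like in build.sbt (the versions and the Provided scope are assumptions to adapt to your setup; this assumes sbt 1.x, where dependencyOverrides is a Seq):

// Let the cluster provide Spark itself, so its Jackson line (2.6.x for Spark 2.4.x) wins
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0" % Provided,
  "org.apache.spark" %% "spark-sql"  % "2.4.0" % Provided
)

// Force any transitively pulled Jackson artifacts down to Spark's version
dependencyOverrides ++= Seq(
  "com.fasterxml.jackson.core"    % "jackson-databind"     % "2.6.7.1",
  "com.fasterxml.jackson.core"    % "jackson-core"         % "2.6.7",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.7.1"
)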
Note that there's also the possibility to include another version of Jackson and force Spark to use it by using the "userClasspathFirst" property. I'll let the reader find information about this if needed.

Error while creating a custom producer in scala

I have written a small custom producer for Kafka in Scala, and it gives the error below. The relevant code is attached in the code section for reference.
Name: Compile Error
Message: <console>:61: error: not found: type KafkaProducer
val producer = new KafkaProducer[String, String](props)
^
I think I need to import a relevant package. I tried a few imports but could not find the correct one.
val producer = new KafkaProducer[String, String](props)
for (i <- 1 to 10) {
  // producer.send(new ProducerRecord[String, String]("jin", "test", "test"));
  val record = new ProducerRecord("jin", "key", "the end ")
  producer.send(record)
}
I can't install a Scala kernel for Jupyter right now, but based on this GitHub example you should add Kafka as a dependency; then the library might be recognized:
%%configure -f
{
  "conf": {
    "spark.jars.packages": "org.apache.spark:spark-streaming_2.11:2.1.0,org.apache.bahir:spark-streaming-twitter_2.11:2.1.0,org.apache.spark:spark-streaming-kafka-0-8_2.10:2.1.0,com.google.code.gson:gson:2.4",
    "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.11"
  }
}
If this doesn't work, try downloading the whole notebook from the git and running it yourself, to see if something else is needed.
@Arthur, the magic command %%configure -f did not work in the Jupyter notebook. I tried downloading the whole notebook from the git, but that did not work either. Luckily, while reading the Apache Toree documentation on adding dependencies, I found the %AddDeps command. After putting the dependencies into the Jupyter notebook in the format below, I managed to run the code.
%AddDeps org.apache.kafka kafka-clients 1.0.0
%AddDeps org.apache.spark spark-core_2.11 2.3.0
Just for the information of others: when we compile the code with sbt, we need to comment out these lines in the Jupyter notebook, since the dependencies will be added to the build.sbt file instead.
Thanks Arthur for pointing me in the right direction!
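For reference, a sketch of the equivalent build.sbt entries (versions copied from the %AddDeps lines above; the scalaVersion is my assumption to match the _2.11 artifact), together with the import that resolves the original compile error:

// build.sbt (sketch)
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.kafka" % "kafka-clients" % "1.0.0",
  "org.apache.spark" %% "spark-core" % "2.3.0"
)

// and in the Scala source, the import that provides KafkaProducer and ProducerRecord:
// import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}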

NoClassDefFoundError with sbt and scala.swing

I'm new to JVM land so I apologize if this is a common problem. I'm using Scala (2.12) with sbt 0.13.13 on OSX.
I'm working on a tiny app that depends on the GUI library scala.swing (2.10.x). I ran into a runtime issue almost immediately with example code (http://otfried.org/scala/index_28.html).
Specifically, when invoking sbt run I get a stacktrace leading with:
[error] (run-main-0) java.lang.NoClassDefFoundError: scala/Proxy$class
java.lang.NoClassDefFoundError: scala/Proxy$class
at scala.swing.Window.<init>(Window.scala:25)
at scala.swing.Frame.<init>(RichWindow.scala:75)
at scala.swing.MainFrame.<init>(MainFrame.scala:19)
(Proxy appears to be a class/trait in the scala stdlib)
Reading on SO and elsewhere suggests this kind of exception is typically emitted when code present at compile time cannot be located subsequently at runtime. Indeed, the code compiles just fine; it is only when running the code that the problem occurs.
All suggestions I've found are to reconcile your classpath to resolve these issues. However, if the sbt console is to be believed, my compile-time and run-time classpaths are identical:
> show compile:fullClasspath
[info] * Attributed(/Users/chris/Projects/thing2/target/scala-2.12/classes)
[info] * Attributed(/Users/chris/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.12.1.jar)
[info] * Attributed(/Users/chris/.ivy2/cache/org.scala-lang/scala-swing/jars/scala-swing-2.10.6.jar)
[success] Total time: 0 s, completed Dec 24, 2016 7:01:15 PM
> show runtime:fullClasspath
[info] * Attributed(/Users/chris/Projects/thing2/target/scala-2.12/classes)
[info] * Attributed(/Users/chris/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.12.1.jar)
[info] * Attributed(/Users/chris/.ivy2/cache/org.scala-lang/scala-swing/jars/scala-swing-2.10.6.jar)
[success] Total time: 0 s, completed Dec 24, 2016 7:01:19 PM
So, I find myself at a bit of a forensic impasse. Any suggestions on where to look next would be much appreciated. For clarity, this has only happened with scala.swing so far; I have a couple of other small Scala projects that haven't had any issues. What's perplexing is that the "missing" class seems to be part of the Scala standard library.
NoClassDefFoundError points to a problem where you mix libraries that were compiled for different major Scala versions. If you use Scala 2.12, you must also use the Swing module with a matching version. Before Scala 2.11, Swing was published with an artifact like this:
"org.scala-lang" % "scala-swing" % scalaVersion.value
It was then moved to the org.scala-lang.modules group. Your build file should contain something like this:
scalaVersion := "2.12.1"
libraryDependencies += "org.scala-lang.modules" %% "scala-swing" % "2.0.0-M2"
(it seems the latest Scala 2.11 compatible version "1.0.2" has not been published for Scala 2.12, and so you need to jump straight to "2.0.0-M2" which should be mostly source compatible).
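To check the fix, a minimal scala.swing program along the lines of the linked example should now run without the NoClassDefFoundError (a sketch, not the asker's exact code):

import scala.swing._

object HelloSwing extends SimpleSwingApplication {
  // A single frame with a label; constructing it exercises scala.swing.MainFrame,
  // which is where the original stack trace failed
  def top = new MainFrame {
    title = "Hello scala.swing"
    contents = new Label("scala-swing 2.0.0-M2 on Scala 2.12")
  }
}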

Kafka & Flink duplicate messages on restart

First of all, this is very similar to Kafka consuming the latest message again when I rerun the Flink consumer, but it's not the same. The answer to that question does NOT appear to solve my problem. If I missed something in that answer, then please rephrase the answer, as I clearly missed something.
The problem is the exact same, though -- Flink (the kafka connector) re-runs the last 3-9 messages it saw before it was shut down.
My Versions
Flink 1.1.2
Kafka 0.9.0.1
Scala 2.11.7
Java 1.8.0_91
My Code
import java.util.Properties
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.connectors.kafka._
import org.apache.flink.streaming.util.serialization._
import org.apache.flink.runtime.state.filesystem._
object Runner {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(500)
    env.setStateBackend(new FsStateBackend("file:///tmp/checkpoints"))
    env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)

    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("group.id", "testing")

    val kafkaConsumer = new FlinkKafkaConsumer09[String]("testing-in", new SimpleStringSchema(), properties)
    val kafkaProducer = new FlinkKafkaProducer09[String]("localhost:9092", "testing-out", new SimpleStringSchema())

    env.addSource(kafkaConsumer)
      .addSink(kafkaProducer)

    env.execute()
  }
}
My SBT Dependencies
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % "1.1.2",
  "org.apache.flink" %% "flink-streaming-scala" % "1.1.2",
  "org.apache.flink" %% "flink-clients" % "1.1.2",
  "org.apache.flink" %% "flink-connector-kafka-0.9" % "1.1.2",
  "org.apache.flink" %% "flink-connector-filesystem" % "1.1.2"
)
My Process
(3 terminals)
TERM-1 start sbt, run program
TERM-2 create kafka topics testing-in and testing-out
TERM-2 run kafka-console-producer on testing-in topic
TERM-3 run kafka-console-consumer on testing-out topic
TERM-2 send data to kafka producer.
Wait for a couple seconds (buffers need to flush)
TERM-3 watch data appear in testing-out topic
Wait for at least 500 milliseconds for checkpointing to happen
TERM-1 stop sbt
TERM-1 run sbt
TERM-3 watch last few lines of data appear in testing-out topic
My Expectations
When there are no errors in the system, I expect to be able to turn flink on and off without reprocessing messages that successfully completed the stream in a prior run.
My Attempts to Fix
I've added the call to setStateBackend, thinking that perhaps the default memory backend just didn't remember correctly. That didn't seem to help.
I've removed the call to enableCheckpointing, hoping that perhaps there was a separate mechanism to track state in Flink vs Zookeeper. That didn't seem to help.
I've used different sinks (RollingFileSink, print()), hoping that maybe the bug was in Kafka. That didn't seem to help.
I've rolled back to Flink (and all connectors) v1.1.0 and v1.1.1, hoping that maybe the bug was in the latest version. That didn't seem to help.
I've added the zookeeper.connect config to the properties object, hoping that the comment about it only being useful in 0.8 was wrong. That didn't seem to help.
I've explicitly set the checkpointing mode to EXACTLY_ONCE (good idea drfloob). That didn't seem to help.
My Plea
Help!
(I've posted the same reply in the JIRA, just cross-posting the same here)
From your description, I'm assuming you're manually shutting down the job, and then resubmitting it, correct?
Flink does not retain exactly-once across manual job restarts, unless you use savepoints (https://ci.apache.org/projects/flink/flink-docs-master/setup/savepoints.html).
The exactly-once guarantee refers to the case where the job fails and then automatically restores itself from previous checkpoints (when checkpointing is enabled, as you did with env.enableCheckpointing(500)).
What is actually happening is that the Kafka consumer simply starts reading from the existing offsets committed in ZK / Kafka when you manually resubmit the job. These offsets were committed to ZK / Kafka the first time you executed the job. However, they are not used for Flink's exactly-once semantics; Flink uses internally checkpointed Kafka offsets for that. The Kafka consumer commits those offsets back to ZK simply to expose a measure of the job's consumption progress to the outside world (outside of Flink).
Update 2: I fixed the bug with the offset handling; it has been merged into the current master.
Update: Not an issue, use manual savepoints before canceling the job (thanks to Gordon)
I checked the logs and it seems like a bug in the offset handling. I filed a report under https://issues.apache.org/jira/browse/FLINK-4618.
I will update this answer when I get feedback.

Is Octave's pipe function supported on Windows7?

I am trying to use multicore-0.2.15 toolbox with Octave v3.6.4 on Windows7 64bit
( http://octave.sourceforge.net/multicore/ )
but even the demo script doesn't seem to work: it is not possible to create a pipe, and I receive an error message. If I evaluate the following command in Octave:
[read_fd, write_fd, err, msg] = pipe ()
I receive the following output:
read_fd = -1
write_fd = -1
err = -1
msg = pipe: not supported on this system
The fork function doesn't work either.
Does anyone have an idea what the problem might be?
Zoltan
The error message pipe: not supported on this system says it all. There is no support for pipe() in your system (Windows 7). You can:
Not use the multicore package, which you will notice is unmaintained (see the unmaintained section at the bottom of the package list); you could instead use the parallel package.
Try another build of Octave. Maybe the MinGW builds will work with pipes.
Try another version of Octave. Version 3.8.1 has already been released and if it is a problem on the side of Octave instead of Windows, maybe it has been fixed.
Change operating system (pipe() works fine in Debian)