Grails: Value out of sequence: expected mode to be OBJECT or ARRAY when writing

I'm trying to parse a JSON object using grails.converters.JSON, but the following error appears.
The code
def str = '{"a": "b"}'
def json = new JSON(str)
or
def map = [:]
map.a = "b"
def json = map as JSON
json = new JSON(json.toString())
returns the following error:
2016-11-22 14:21:34.592 ERROR --- [nio-8080-exec-9] o.g.web.errors.GrailsExceptionResolver : JSONException occurred when processing request: [GET] /test/index
Value out of sequence: expected mode to be OBJECT or ARRAY when writing '{"a":"b"}' but was INIT. Stacktrace follows:
java.lang.reflect.InvocationTargetException: null
at org.grails.core.DefaultGrailsControllerClass$ReflectionInvoker.invoke(DefaultGrailsControllerClass.java:210)
at org.grails.core.DefaultGrailsControllerClass.invoke(DefaultGrailsControllerClass.java:187)
at org.grails.web.mapping.mvc.UrlMappingsInfoHandlerAdapter.handle(UrlMappingsInfoHandlerAdapter.groovy:90)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at org.springframework.boot.web.filter.ApplicationContextHeaderFilter.doFilterInternal(ApplicationContextHeaderFilter.java:55)
at org.grails.web.servlet.mvc.GrailsWebRequestFilter.doFilterInternal(GrailsWebRequestFilter.java:77)
at org.grails.web.filters.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.grails.web.converters.exceptions.ConverterException: org.grails.web.json.JSONException: Value out of sequence: expected mode to be OBJECT or ARRAY when writing '{"a":"b"}' but was INIT
at org.grails.web.converters.AbstractConverter.toString(AbstractConverter.java:111)
at grails3.TestController$$EQ3IkyX7.index(TestController.groovy:25)
... 14 common frames omitted
Caused by: org.grails.web.converters.exceptions.ConverterException: org.grails.web.json.JSONException: Value out of sequence: expected mode to be OBJECT or ARRAY when writing '{"a":"b"}' but was INIT
at grails.converters.JSON.value(JSON.java:193)
at grails.converters.JSON.render(JSON.java:119)
at org.grails.web.converters.AbstractConverter.toString(AbstractConverter.java:109)
... 15 common frames omitted
Caused by: org.grails.web.json.JSONException: Value out of sequence: expected mode to be OBJECT or ARRAY when writing '{"a":"b"}' but was INIT
at org.grails.web.json.JSONWriter.append(JSONWriter.java:142)
at org.grails.web.json.JSONWriter.value(JSONWriter.java:353)
at grails.converters.JSON.value(JSON.java:162)
... 17 common frames omitted
Grails version: 3.2.3
Java version: 1.8u45 and 1.8u111

This statement by itself converts the map to JSON:
def val = [a: 101, b: '100'] as JSON
You don't need to do json = new JSON(json.toString()) anymore.
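As for the original goal of parsing a JSON string: new JSON(str) only wraps a value for later rendering, it does not parse, which is why toString() fails with the "Value out of sequence" error. grails.converters.JSON has a static parse method for that. A minimal sketch, written in plain Java for illustration (the same calls work from a Groovy controller; the org.grails.web.json types are assumed from Grails 3.x):

import grails.converters.JSON;
import org.grails.web.json.JSONObject;

public class ParseExample {
    public static void main(String[] args) {
        // JSON.parse turns a JSON string into an element tree
        Object parsed = JSON.parse("{\"a\": \"b\"}");
        // the top-level element here is an object, so the cast is safe
        String a = ((JSONObject) parsed).get("a").toString(); // "b"
        System.out.println(a);
    }
}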


How to programmatically create a map schema using springdoc

I want to get a map schema in components, e.g.
components:
  schemas:
    Response:
      type: object
      additionalProperties:
        type: string
Here is my code:
@Bean
public OpenAPI customOpenAPI() {
    MapSchema mySchema = new MapSchema();
    mySchema.setAdditionalProperties(new StringSchema());
    return new OpenAPI().components(new Components()
            .addSchemas("Response", mySchema)
    );
}
But it raises an exception:
{"timestamp":"2022-11-24 14:23:54.686","component":"","level":"WARN","thread-id":"http-nio-8080-exec-1","logger":"org.springdoc.core.OpenAPIService","message":"Json Processing Exception occurred: Cannot deserialize value of type `java.lang.Boolean` from Object value (token `JsonToken.START_OBJECT`)
at [Source: UNKNOWN; byte offset: #UNKNOWN] (through reference chain: io.swagger.v3.oas.models.OpenAPI["components"]->io.swagger.v3.oas.models.Components["schemas"]->java.util.LinkedHashMap["Response"])"}
{"timestamp":"2022-11-24 14:23:54.711","component":"","level":"ERROR","thread-id":"http-nio-8080-exec-1","logger":"org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet]","message":"Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.NullPointerException] with root cause"}
java.lang.NullPointerException: null
at java.base/java.util.Objects.requireNonNull(Objects.java:208)
at org.springdoc.core.OpenAPIService.buildOpenAPIWithOpenAPIDefinition(OpenAPIService.java:520)
at org.springdoc.core.OpenAPIService.build(OpenAPIService.java:259)
at org.springdoc.api.AbstractOpenApiResource.getOpenApi(AbstractOpenApiResource.java:314)
at org.springdoc.webmvc.api.OpenApiResource.openapiYaml(OpenApiResource.java:156)
at org.springdoc.webmvc.api.OpenApiWebMvcResource.openapiYaml(OpenApiWebMvcResource.java:133)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
If I change the code to
@Bean
public OpenAPI customOpenAPI() {
    MapSchema mySchema = new MapSchema();
    mySchema.setAdditionalProperties(false); // <-- I pass a boolean value instead of a schema
    return new OpenAPI().components(new Components()
            .addSchemas("Response", mySchema)
    );
}
SpringDoc can generate the swagger doc successfully.
components:
  schemas:
    Response:
      type: object
      additionalProperties: false
Can you help check what's wrong with my previous code? Thanks.
I'm using spring-boot 2.7.5 and
org.springdoc/springdoc-openapi-webmvc-core 1.6.11
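One way to narrow this down is to serialize the model directly with swagger-core's Yaml utility: if that prints the expected YAML, the schema definition itself is fine and the failure sits in springdoc's own Jackson round-trip (OpenAPIService in the stack trace). A minimal sketch, assuming io.swagger.v3.core.util.Yaml is on the classpath (swagger-core comes in transitively with springdoc):

import io.swagger.v3.core.util.Yaml;
import io.swagger.v3.oas.models.Components;
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.media.MapSchema;
import io.swagger.v3.oas.models.media.StringSchema;

public class SchemaCheck {
    public static void main(String[] args) {
        MapSchema mySchema = new MapSchema();
        mySchema.setAdditionalProperties(new StringSchema());
        OpenAPI api = new OpenAPI().components(new Components()
                .addSchemas("Response", mySchema));
        // Yaml.pretty uses swagger-core's preconfigured mapper, bypassing springdoc
        System.out.println(Yaml.pretty(api));
    }
}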

Apache Spark SQL get_json_object java.lang.String cannot be cast to org.apache.spark.unsafe.types.UTF8String

I am trying to read a JSON stream from an MQTT broker in Apache Spark with structured streaming, read some properties of the incoming JSON, and output them to the console. My code looks like this:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.get_json_object
import org.apache.spark.sql.types.StringType

val spark = SparkSession
  .builder()
  .appName("BahirStructuredStreaming")
  .master("local[*]")
  .getOrCreate()

import spark.implicits._

val topic = "temp"
val brokerUrl = "tcp://localhost:1883"

val lines = spark.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", topic).option("persistence", "memory")
  .load(brokerUrl)
  .toDF().withColumn("payload", $"payload".cast(StringType))

val jsonDF = lines.select(get_json_object($"payload", "$.eventDate").alias("eventDate"))

val query = jsonDF.writeStream
  .format("console")
  .start()

query.awaitTermination()
However, when the JSON arrives I get the following errors:
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Writing job aborted.
=== Streaming Query ===
Identifier: [id = 14d28475-d435-49be-a303-8e47e2f907e3, runId = b5bd28bb-b247-48a9-8a58-cb990edaf139]
Current Committed Offsets: {MQTTStreamSource[brokerUrl: tcp://localhost:1883, topic: temp clientId: paho7247541031496]: -1}
Current Available Offsets: {MQTTStreamSource[brokerUrl: tcp://localhost:1883, topic: temp clientId: paho7247541031496]: 0}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
Project [get_json_object(payload#22, $.id) AS eventDate#27]
+- Project [id#10, topic#11, cast(payload#12 as string) AS payload#22, timestamp#13]
+- StreamingExecutionRelation MQTTStreamSource[brokerUrl: tcp://localhost:1883, topic: temp clientId: paho7247541031496], [id#10, topic#11, payload#12, timestamp#13]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:300)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2783)
at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3365)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$14(MicroBatchExecution.scala:533)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:198)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
... 1 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 8, localhost, executor driver): java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.unsafe.types.UTF8String
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String$(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getUTF8String(rows.scala:195)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:619)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$2(WriteToDataSourceV2Exec.scala:117)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:116)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.$anonfun$doExecute$2(WriteToDataSourceV2Exec.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:405)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 34 more
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.unsafe.types.UTF8String
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getUTF8String$(rows.scala:46)
at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getUTF8String(rows.scala:195)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:619)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$2(WriteToDataSourceV2Exec.scala:117)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:116)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.$anonfun$doExecute$2(WriteToDataSourceV2Exec.scala:67)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:405)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I am sending the JSON records using the mosquitto broker, and they look like this:
mosquitto_pub -m '{"eventDate": "2020-11-11T15:17:00.000+0200"}' -t "temp"
It seems that every string coming from the Bahir stream source provider raises this error. For instance, the following code also raises it:
spark.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", topic).option("persistence", "memory")
  .load(brokerUrl)
  .select("topic")
  .writeStream
  .format("console")
  .start()
It looks like Spark does not recognize strings coming from Bahir, maybe some kind of weird string class version issue. I've tried the following to make the code work:
setting the Java version to 8
upgrading the Spark version from 2.4.0 to 2.4.7
setting the Scala version to 2.11.12
using the decode function with all possible encoding combinations instead of .cast(StringType) to transform the "payload" column to String
using the substring function on the "payload" column to recreate a compatible String
Finally, I got working code by recreating the string from the raw byte payload through a typed Dataset:
val lines = spark.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", topic).option("persistence", "memory")
  .load(brokerUrl)
  .select("payload")
  .as[Array[Byte]]
  .map(payload => new String(payload))
  .toDF("payload")
This solution is rather ugly, but at least it works.
I believe there is nothing wrong with the code provided in the question, and I suspect a bug on the Bahir or Spark side that prevents Spark from handling the String values coming from the Bahir source.
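For reference, here is the same workaround expressed through Spark's Java API and combined with the original get_json_object projection. This is a sketch under the same assumption as above, namely that the raw payload column arrives as bytes:

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.get_json_object;

import java.nio.charset.StandardCharsets;

import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MqttJsonStream {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("BahirStructuredStreaming")
                .master("local[*]")
                .getOrCreate();

        // Decode the binary payload on the JVM side instead of casting inside
        // Catalyst, which is what triggered the UTF8String ClassCastException.
        Dataset<String> payload = spark.readStream()
                .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
                .option("topic", "temp").option("persistence", "memory")
                .load("tcp://localhost:1883")
                .select("payload")
                .as(Encoders.BINARY())
                .map((MapFunction<byte[], String>) bytes ->
                        new String(bytes, StandardCharsets.UTF_8), Encoders.STRING());

        Dataset<Row> jsonDF = payload.toDF("payload")
                .select(get_json_object(col("payload"), "$.eventDate").alias("eventDate"));

        jsonDF.writeStream().format("console").start().awaitTermination();
    }
}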

Spark SQL error read JSON file : java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class

I am trying to read a JSON file using Spark SQL in Java.
This is my code:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
...
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(jsc);
DataFrame df = sqlContext.jsonFile("~/test.json");
df.printSchema();
df.registerTempTable("test");
...
I made a simple JSON file, "test.json", to keep things simple:
{
"name": "myname"
}
and when I try to run the code, this error message appears:
17/03/30 10:02:26 INFO BlockManagerMasterEndpoint: Registering block manager 10.6.86.82:36824 with 1948.2 MB RAM, BlockManagerId(driver, 10.6.86.82, 36824)
17/03/30 10:02:26 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.6.86.82, 36824)
17/03/30 10:02:26 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Exception in thread "main" java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
at org.apache.spark.sql.sources.CaseInsensitiveMap.<init>(ddl.scala:344)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:219)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:697)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:572)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:553)
at sugi.kau.sparkonjava.SparkSQL.main(SparkSQL.java:32)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
17/03/30 10:02:26 INFO SparkContext: Invoking stop() from shutdown hook
...
Thanks.
In the Spark docs, for the function jsonFile(String path):
Loads a JSON file (one object per line), returning the result as a DataFrame. (Note that jsonFile is replaced by read().json())
So you should have one object per line, and your source file should look like this:
{"name": "myname"}
{"name": "myname2"}
.....
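Here is a minimal Java sketch of the replacement API, assuming Spark 2.x where SparkSession supersedes SQLContext. Separately, the NoClassDefFoundError for scala.collection.GenTraversableOnce$class usually indicates mixed Scala binary versions on the classpath (for example, a jar built for Scala 2.10/2.11 running against a newer Scala), which is worth checking alongside the API change:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadJson {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ReadJson")
                .master("local[*]")
                .getOrCreate();

        // read().json() expects JSON Lines input: one complete object per line
        Dataset<Row> df = spark.read().json("test.json");
        df.printSchema();
        df.createOrReplaceTempView("test");
    }
}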

Groovy JsonBuilder appending \u0000's to string

I'm trying to make a simple UDP socket server for a Unity3D game I'm making, and I've got it mostly working. I can send messages to it and read them. But when I try to send the message back to the client (for testing purposes, at the moment), I get a BufferOverflowException.
Before sending the data back, I'm converting it to json using groovy.json.JsonBuilder. The data has a very simple structure:
[data: "Hello World"]
But for whatever reason, JsonBuilder is building it as
{
"data": "Hello World\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\..."
}
The \u0000s go on for a while, long enough to make my 1024-byte ByteBuffer overflow.
This is the class that's responsible for sending the data back to the client:
import groovy.json.JsonBuilder
import groovy.transform.CompileStatic
import groovyx.gpars.actor.DynamicDispatchActor

import java.nio.ByteBuffer
import java.nio.channels.DatagramChannel

@CompileStatic
class SenderActor extends DynamicDispatchActor {
    // takes a message of type [data: Object, receiver: SocketAddress]
    void onMessage(Map message) {
        println(message.data) // prints "Hello World"
        def json = new JsonBuilder([data: message.data]).toString()
        println("Sending: $json") // prints '{"data": "Hello World\u0000\u0000..."}'
        def channel = DatagramChannel.open()
        channel.connect(message.receiver as SocketAddress)
        def buffer = ByteBuffer.allocate(1024)
        buffer.clear()
        buffer.put(json.getBytes())
        buffer.flip()
        channel.send(buffer, message.receiver as SocketAddress)
    }
}
And this is the stack trace I get:
An exception occurred in the Actor thread Actor Thread 2
java.nio.BufferOverflowException
at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:189)
at java.nio.ByteBuffer.put(ByteBuffer.java:859)
at Croquet.Actors.SenderActor.onMessage(SenderActor.groovy:28)
at Croquet.Actors.SenderActor$onMessage.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at Croquet.Actors.ProcessorActor$onMessage.call(Unknown Source)
at groovyx.gpars.actor.impl.DDAClosure$_createDDAClosure_closure1.doCall(DDAClosure.groovy:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:292)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1016)
at groovy.lang.Closure.call(Closure.java:423)
at groovy.lang.Closure.call(Closure.java:439)
at groovyx.gpars.actor.AbstractLoopingActor.runEnhancedWithoutRepliesOnMessages(AbstractLoopingActor.java:293)
at groovyx.gpars.actor.AbstractLoopingActor.access$400(AbstractLoopingActor.java:30)
at groovyx.gpars.actor.AbstractLoopingActor$1.handleMessage(AbstractLoopingActor.java:93)
at groovyx.gpars.util.AsyncMessagingCore.run(AsyncMessagingCore.java:132)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The data in question is encoded as UTF-8, if that helps.
This is the client code that is responsible for sending data to the server (written in C#):
void sendTestMessage(UdpClient udpClient, UdpState udpState){
    byte[] data = Encoding.UTF8.GetBytes("Hello World");
    udpClient.BeginSend(
        data,
        data.Length,
        udpState.e, // IPEndPoint
        result => {
            messageSent = true;
            Debug.Log(string.Format("Message '{1}' Sent to {0}", udpState.e, Encoding.UTF8.GetString(data)));
            udpClient.EndSend(result);
        },
        udpState);
}
I solved my problem by changing
def json = new JsonBuilder([data: message.data]).toString()
to
def json = new JsonBuilder([data: message.data]).toString().replace("\\u0000", "")
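The replace call removes the escaped NULs from the JSON text, but they most likely enter earlier, on the receive side, if a fixed 1024-byte buffer is converted to a String wholesale (the receive code isn't shown here, so this is an assumption). Below is a sketch of a receive loop that only decodes the bytes actually read, in plain Java since the NIO calls are the same from Groovy; the port number is hypothetical:

import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.charset.StandardCharsets;

public class ReceiveLoop {
    public static void main(String[] args) throws Exception {
        DatagramChannel channel = DatagramChannel.open()
                .bind(new InetSocketAddress(9999)); // hypothetical port
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            buffer.clear();
            SocketAddress sender = channel.receive(buffer);
            buffer.flip();
            byte[] bytes = new byte[buffer.remaining()]; // only the bytes received
            buffer.get(bytes);
            String data = new String(bytes, StandardCharsets.UTF_8); // no \u0000 padding
            System.out.println("Received from " + sender + ": " + data);
        }
    }
}

With the padding gone at the source, the JsonBuilder output no longer needs the replace.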

Google search with Retrofit

I am trying to learn Retrofit and want to use it for a Google search. I have written the following code:
public class RetrofitSample {
    private static final String URL = "https://www.google.com";

    interface Google {
        @GET("/search?q=retrofit")
        String search();
    }

    public static void main(String... args) {
        RestAdapter restAdapter = new RestAdapter.Builder()
                .setEndpoint(URL)
                .build();
        Google goog = restAdapter.create(Google.class);
        String result = goog.search();
        System.out.println(result);
    }
}
When I run this code, I get the following error:
Exception in thread "main" retrofit.RetrofitError: com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
at retrofit.RetrofitError.conversionError(RetrofitError.java:33)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:383)
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:240)
at $Proxy0.search(Unknown Source)
at RetrofitSample.main(RetrofitSample.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: retrofit.converter.ConversionException: com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
at retrofit.converter.GsonConverter.fromBody(GsonConverter.java:67)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:367)
... 8 more
Caused by: com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
at com.google.gson.Gson.assertFullConsumption(Gson.java:786)
at com.google.gson.Gson.fromJson(Gson.java:776)
at retrofit.converter.GsonConverter.fromBody(GsonConverter.java:63)
... 9 more
Caused by: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1573)
at com.google.gson.stream.JsonReader.checkLenient(JsonReader.java:1423)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:546)
at com.google.gson.stream.JsonReader.peek(JsonReader.java:429)
at com.google.gson.Gson.assertFullConsumption(Gson.java:782)
... 11 more
Do I have to use the Google Search API (custom search), or is Google search returning some other type as the result?
Thanks in advance.
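The stack trace points at Retrofit 1.x, whose default Gson converter tries to parse every response body as JSON even when the declared return type is String; google.com returns an HTML page, hence the MalformedJsonException. Declaring the raw retrofit.client.Response as the return type bypasses the converter. A sketch under that assumption (note that the supported way to get actual JSON search results is the Custom Search API):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

import retrofit.RestAdapter;
import retrofit.client.Response;
import retrofit.http.GET;

public class RetrofitRawSample {
    private static final String URL = "https://www.google.com";

    interface Google {
        @GET("/search?q=retrofit")
        Response search(); // raw response, no Gson conversion
    }

    public static void main(String... args) throws Exception {
        Google goog = new RestAdapter.Builder()
                .setEndpoint(URL)
                .build()
                .create(Google.class);
        Response response = goog.search();
        // TypedInput.in() exposes the body as an InputStream
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(response.getBody().in(), StandardCharsets.UTF_8))) {
            System.out.println(reader.lines().collect(Collectors.joining("\n")));
        }
    }
}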