I need to fetch custom options from a proto, and for that I need the service descriptor in Go. I am not using protoc-gen-go-grpc to compile the protos, since it's an existing project and I can't force everyone to use protoc-gen-go-grpc now. In the existing generated .pb file the descriptor is unexported. Please help me find a way to get the grpc.ServiceDesc value.
Related
In my Karate tests I need to write response IDs to .txt files (or any other file format such as JSON). I was wondering if Karate has any capability to do this; I haven't seen it in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects / the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
Here is an example.
EDIT: for others coming across this answer in the future, the right thing to do is:
don't write files in the first place. You never need to do this; this question is typically asked by inexperienced folks who think that the only way to "save" a response before validation is to write it to a file. Please don't waste your time, and just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses to the log file (typically target/karate.log) and also to the HTML report.
see if karate.write() works for you as per this answer
write a custom Java function (or a JS function that uses the JVM) to do what you want, using Java interop
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint to which the upstream system sends some basic data as a JSON payload via POST/PUT; Karate constructs the subsequent payload file and stores it in a specific folder, and this newly created payload file is then exposed through another GET call.
I want to convert incoming JSON data from Kafka into a dataframe.
I am using structured streaming with Scala 2.12
Most people add a hard-coded schema, but if the JSON can have additional fields, that requires changing the code base every time, which is tedious.
One approach is to write it to a file and infer the schema from that, but I would rather avoid doing that.
Is there any other way to approach this problem?
Edit: I found a way to turn a JSON string into a dataframe, but I can't extract it from the stream source. Is it possible to extract it?
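For reference, the non-streaming way to turn a JSON string into a DataFrame and let Spark infer the schema is presumably something like the sketch below (session and sample string are illustrative); the catch is that spark.read is a batch API, so it cannot be pointed at a streaming source directly:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("json-string-to-df").master("local[*]").getOrCreate()
import spark.implicits._

// Batch only: spark.read.json infers the schema from a Dataset[String],
// but a structured-streaming source cannot be fed through spark.read.
val json = """{"id":"42","payload":{"a":1}}"""
val df = spark.read.json(Seq(json).toDS())
df.printSchema()
df.show(truncate = false)
```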
One way is to store the schema itself in the message headers (not in the key or value).
Though this increases the message size, it makes it easy to parse the JSON value without the need for any external resource like a file or a schema registry.
New messages can have new schemas while old messages can still be processed using their old schemas, because each schema travels within the message itself.
Alternatively, you can version the schemas and include an id for every schema in the message headers, or a magic byte in the key or value, and infer the schema from there.
This is the approach followed by the Confluent Schema Registry. It basically lets you go through different versions of the same schema and see how your schema has evolved over time.
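As an illustration of the header approach, here is a rough Spark sketch. The broker address, topic name, and the "schema" header key are assumptions; it needs Spark 3.0+ for includeHeaders, and it assumes every message in a micro-batch carries the same DDL schema string:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType

val spark = SparkSession.builder.appName("schema-in-headers").getOrCreate()
import spark.implicits._

// Hypothetical topic and broker; headers arrive as array<struct<key: string, value: binary>>.
val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .option("includeHeaders", "true")
  .load()

// Pull the DDL schema string out of the "schema" header of the first message in each
// micro-batch, then parse the JSON values of that batch with it.
val handleBatch: (DataFrame, Long) => Unit = (batch, _) =>
  if (!batch.isEmpty) {
    val ddl = batch
      .select(explode($"headers").as("h"))
      .filter($"h.key" === "schema")
      .select($"h.value".cast("string"))
      .head()
      .getString(0)

    batch
      .select(from_json($"value".cast("string"), StructType.fromDDL(ddl)).as("data"))
      .select("data.*")
      .show(truncate = false)
  }

val query = raw.writeStream.foreachBatch(handleBatch).start()
```

If schemas can differ within a batch, the parsing would have to move into a per-row function instead, since from_json takes a single schema per column.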
Read the data as a string and then convert it to a Map[String, String]; this way you can process any JSON without even knowing its schema.
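A minimal sketch of that idea using from_json with a map type (the sample messages stand in for the Kafka value column):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{MapType, StringType}

val spark = SparkSession.builder.appName("json-as-map").master("local[*]").getOrCreate()
import spark.implicits._

// Parse each JSON string into a map<string, string>, so no fixed schema is needed.
// Nested objects end up as JSON strings inside the map values.
val messages = Seq("""{"id":"42","name":"a"}""", """{"id":"43","extra":"b"}""").toDF("value")

val asMap = messages.select(from_json($"value", MapType(StringType, StringType)).as("kv"))
asMap.select($"kv".getItem("id").as("id")).show()
```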
Based on JavaTechnical's answer, the best approach would be to use a schema registry and Avro data instead of JSON; there is no way around hardcoding a schema (for now):
include your schema name and id as headers and use them to read the schema from the schema registry;
use the from_avro function to turn that data into a DataFrame (see the sketch below).
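A rough sketch of that last step, assuming the spark-avro package is on the classpath and that the Avro schema JSON for the relevant id has already been fetched from the registry (topic, broker, and record definition below are made up). Note that Confluent-framed messages carry one magic byte plus a four-byte schema id before the Avro payload, which from_avro does not strip by itself:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.avro.functions.from_avro // spark-avro; on Spark 2.4 the import is org.apache.spark.sql.avro.from_avro

val spark = SparkSession.builder.appName("avro-from-kafka").getOrCreate()
import spark.implicits._

// Hypothetical topic and broker.
val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events-avro")
  .load()

// Avro schema JSON fetched from the schema registry beforehand (hypothetical record).
val avroSchemaJson =
  """{"type":"record","name":"Event","fields":[{"name":"id","type":"string"}]}"""

val df = raw
  .select(expr("substring(value, 6, length(value) - 5)").as("payload")) // drop the 5-byte Confluent framing
  .select(from_avro($"payload", avroSchemaJson).as("data"))
  .select("data.*")
```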
I am a beginner in dart/flutter programming and I could use some help with using the json_serializable plugin:
I've set everything up according to this tutorial: https://flutter.io/json/
but the one thing that is never mentioned is how to load the .json file into a JSON string for the .decode method at the end.
I've seen a lot of people use an async function to load it, but I can't get it to work. Another question I have is: how does json_serializable load it? According to my research (https://pub.dartlang.org/packages/json_serializable) it produces a Dart field with the contents of a file containing JSON. Is it possible to also use this in the .decode method?
It would be great if someone could give me answers to my questions.
My program stack is ReactiveMongo 0.11.0, Scala 2.11.6, Play 2.4.2.
I'm adding PATCH functionality support to my Controllers. I want it to be type safe, so that PATCH does not mess up the data in Mongo.
My current (dirty) solution for doing this is:
Reading the object from Mongo first,
Performing JsObject.deepMerge with the provided patch,
Checking that the value can still be deserialized to the target type,
Serializing the merged object back to a JsObject and checking that the patch contains only fields that are present in the merged JSON (so that no trash is added to the stored object),
Calling the actual $set on Mongo.
This is obviously not perfect, but it works fine. I would write macros to generate an appropriate format generalization, but that might take too much time, which I currently lack.
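For reference, a rough sketch of steps 2-4 with a Play JSON macro-generated format; the User case class and its fields are made up, and error handling is simplified:

```scala
import play.api.libs.json._

case class User(id: String, name: String, email: String)
implicit val userFormat: OFormat[User] = Json.format[User] // macro-generated format

// Merge the stored document with the PATCH body, check that the result still
// deserializes to User, and reject patches that introduce fields User does not have.
def applyPatch(stored: JsObject, patch: JsObject): Either[String, User] = {
  val merged = stored.deepMerge(patch)
  merged.validate[User] match {
    case JsError(errors) => Left(s"patched document is no longer valid: $errors")
    case JsSuccess(user, _) =>
      val roundTripped = Json.toJson(user).as[JsObject]
      val unknown = patch.keys.toSet.diff(roundTripped.keys.toSet)
      if (unknown.nonEmpty) Left(s"patch contains unknown fields: ${unknown.mkString(", ")}")
      else Right(user)
  }
}
```

Only the top-level keys are checked here; nested fields would need a recursive comparison.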
Is there a way to use the Play Framework JSON macro-generated format for partial entity validation like this?
Or any other solution that can be easily integrated into Play Framework, for that matter.
With the help of #julien-richard-foy I made a small library to do exactly what I wanted:
https://github.com/clemble/scala-validator
I need to add some documentation, and then I'll publish it to a repository.
I am loading data from a MongoDB collection into a MySQL table through a Kettle transformation.
First I extract the documents using the MongoDB Input step, and then I use the JSON Input step.
But since the JSON Input step has very low performance, I wanted to replace it with a JavaScript step.
I am a beginner in JavaScript, and even though I tried some things, the Kettle JavaScript step is not recognizing any keywords.
Can anyone give me sample code to convert JSON data into separate columns using JavaScript?
To solve your problem you need to consider three aspects:
Reading from MongoDB
Reading from JSON
Reading from (probably) String
Reading from MongoDB: unless you changed the interface, MongoDB returns not JSON but BSON (roughly, binary JSON). You need to check the MongoDB documentation about reading and writing BSON: probably something like BSON.to() and BSON.from(), but I don't know it by heart.
Reading from JSON: once you have your BSON in JSON format, you can read it using JSON.stringify(), which returns a String.
Reading from (probably) String: if you want to use the capabilities of JSON (why else would you use JSON?), you also want to use JSON.parse(), which returns a JSON object.
My experience is that to send a JSON object from one step to the other, using a String is not a bad idea, i.e. at the end of a JavaScript step, you write your JSON object to a String and at the beginning of the next JavaScript step (can be further down the stream) you parse it back to JSON to work with it.
I hope this answers your question.
PS: writing JavaScript steps requires you to learn JavaScript. You don't have to be a master, but the basics are required. There is no way around it.
You could use the JSON Input step to get the values of this JSON and put them into regular rows.