Loading JSON into my unit test from a text file

I am working in AEM, trying to create .txt files with JSON output so that I can load them into my unit tests as strings and test my model / model processors. So far I have this:
public String readFile(String path, Charset encoding) throws IOException {
    byte[] encoded = Files.readAllBytes(Paths.get(path));
    return new String(encoded, encoding);
}

private String sampleInput = readFile("/test/resources/map/sample-input.txt", Charset.forName("UTF-8"));
I need sampleInput to take the JSON that is in 'sample-input.txt' and hold it as a string. I am also running into issues with the Charset encoding.

I think the easiest way to manage JSON documents you use for unit testing is by keeping them organized in the classpath. Guava provides a neat wrapper for loading classpath resources.
import com.google.common.base.Charsets;
import com.google.common.io.Resources;

import java.io.IOException;
import java.net.URL;

public class TestJsonDocumentLoader {

    private final Class<?> clazz;

    public TestJsonDocumentLoader(Class<?> clazz) {
        this.clazz = clazz;
    }

    public String loadTestJson(String fileName) {
        URL url = Resources.getResource(clazz, fileName);
        try {
            return Resources.toString(url, Charsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException("Couldn't load a JSON file.", e);
        }
    }
}
This can then be used to load arbitrary JSON files placed in the same package as the test class. It is assumed that the files are UTF-8 encoded. I suggest keeping all sources encoded that way, regardless of the OS your team is using. It saves you a lot of trouble with version control.
Let's say you have MyTest in src/test/java/com/example/mytestsuite, then you could place a file data.json in src/test/resources/com/example/mytestsuite and load it by calling
TestJsonDocumentLoader loader = new TestJsonDocumentLoader(MyTest.class);
String jsonData = loader.loadTestJson("data.json");
String someOtherExample = loader.loadTestJson("other.json");
Actually, this could be used for all sorts of text files.
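If you would rather not pull in Guava, the same classpath idea works with a plain-JDK sketch (InputStream.readAllBytes needs Java 9+; the MyTest class and resource path reuse the example above):

private static String readResource(String name) {
    try (InputStream in = MyTest.class.getResourceAsStream(name)) {
        if (in == null) {
            throw new IllegalArgumentException("Resource not found: " + name);
        }
        return new String(in.readAllBytes(), StandardCharsets.UTF_8); // Java 9+
    } catch (IOException e) {
        throw new RuntimeException("Couldn't read " + name, e);
    }
}

// src/test/resources/com/example/mytestsuite/data.json is visible on the
// test classpath as /com/example/mytestsuite/data.json
String jsonData = readResource("/com/example/mytestsuite/data.json");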

You could also use Jackson's ObjectMapper as an alternative:
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.io.InputStream;

public class JsonResourceObjectMapper<T> {

    private final Class<T> model;

    public JsonResourceObjectMapper(Class<T> model) {
        this.model = model;
    }

    public T loadTestJson(String fileName) throws IOException {
        ClassLoader classLoader = this.getClass().getClassLoader();
        InputStream inputStream = classLoader.getResourceAsStream(fileName);
        return new ObjectMapper().readValue(inputStream, this.model);
    }
}
And then set up a fixture in the test, passing a .class:
private JsonClass json;

@Before
public void setUp() throws IOException {
    JsonResourceObjectMapper<JsonClass> mapper = new JsonResourceObjectMapper<>(JsonClass.class);
    json = mapper.loadTestJson("json/testJson.json");
}
Note that the testJson.json file is in the resources/json folder, following the same convention @toniedzwiedz mentioned.
So then you could use the JSON model as:
@Test
public void testJsonNameProperty() {
    // act
    String name = json.getName();
    // assert
    assertEquals("testName", name);
}
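For completeness, a minimal sketch of what this example assumes: a json/testJson.json fixture and a JsonClass bean with a name property (the exact field layout is a guess):

// src/test/resources/json/testJson.json:
// {"name": "testName"}

public class JsonClass {

    private String name;

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }
}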

Related

How to get an InputStream from S3Client using the AWS Java SDK?

Currently, I have code written in regular Java that gets a public-readable S3 object's InputStream and creates a thumbnail image.
Now I am looking to convert it to reactive Java using Project Reactor on Spring WebFlux. The following is my code so far, and I don't know how to convert it to an InputStream:
public ByteArrayOutputStream createThumbnail(String fileKey, String imageFormat) {
    try {
        LOG.info("fileKey: {}, endpoint: {}", fileKey, s3config.getSubdomain());
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(s3config.getBucket())
                .key(fileKey)
                .build();
        Mono.fromFuture(s3client.getObject(request, new FluxResponseProvider()))
                .map(fluxResponse -> new ResponseInputStream(fluxResponse.sdkResponse, <ABORTABLE_INPUTSTREAM?>))
I saw ResponseInputStream and I am thinking maybe that is the way to create an InputStream, but I don't know what to pass as the AbortableInputStream in that constructor.
Is that even the way to create an InputStream?
By the way, I am using FluxResponseProvider from Baeldung's documentation, which is:
import reactor.core.publisher.Flux;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;
class FluxResponseProvider implements AsyncResponseTransformer<GetObjectResponse, FluxResponse> {

    private FluxResponse response;

    @Override
    public CompletableFuture<FluxResponse> prepare() {
        response = new FluxResponse();
        return response.cf;
    }

    @Override
    public void onResponse(GetObjectResponse sdkResponse) {
        this.response.sdkResponse = sdkResponse;
    }

    @Override
    public void onStream(SdkPublisher<ByteBuffer> publisher) {
        response.flux = Flux.from(publisher);
        response.cf.complete(response);
    }

    @Override
    public void exceptionOccurred(Throwable error) {
        response.cf.completeExceptionally(error);
    }
}

class FluxResponse {
    final CompletableFuture<FluxResponse> cf = new CompletableFuture<>();
    GetObjectResponse sdkResponse;
    Flux<ByteBuffer> flux;
}
Does anybody know how to get an InputStream from an S3 object in reactive Java? I am using awssdk version 2.17.195.
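One possible direction, sketched under assumptions rather than as a definitive answer: for thumbnail-sized objects you can let the SDK buffer the body and hand you an InputStream via AsyncResponseTransformer.toBytes(), avoiding the ResponseInputStream/AbortableInputStream constructor entirely. This buffers the whole object in memory, so it only suits small files; s3client and s3config are the fields from the question:

// uses software.amazon.awssdk.core.async.AsyncResponseTransformer
// and software.amazon.awssdk.core.ResponseBytes
GetObjectRequest request = GetObjectRequest.builder()
        .bucket(s3config.getBucket())
        .key(fileKey)
        .build();

Mono<InputStream> body =
        Mono.fromFuture(s3client.getObject(request, AsyncResponseTransformer.toBytes()))
            .map(responseBytes -> responseBytes.asInputStream()); // whole object is in memory here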

Non-blocking parsing of a JSON String to a JsonNode

I'm exploring reactive programming with Spring WebFlux and therefore I'm trying to make my code completely non-blocking to get all the benefits of a reactive application.
Currently, my code for the method that parses a JSON string to a JsonNode to get specific values (in this case the elementId) looks like this:
public Mono<String> readElementIdFromJsonString(String jsonString) {
    final JsonNode jsonNode;
    try {
        jsonNode = MAPPER.readTree(jsonString);
    } catch (IOException e) {
        return Mono.error(e);
    }
    final String elementId = jsonNode.get("elementId").asText();
    return Mono.just(elementId);
}
However, IntelliJ notifies me that I'm using an inappropriate blocking method call with this code:
MAPPER.readTree(jsonString);
How can I implement this code in a non-blocking way? I have seen that since Jackson 2.9+ it is possible to parse a JSON string in a non-blocking, async way, but I don't know how to use that API and I couldn't find an example of how to do it correctly.
I am not sure why it is saying it is a blocking call, since Jackson is non-blocking as far as I know. Anyway, one way to resolve this issue, if you do not want to use any other library, is to use schedulers. Like this:
public Mono<String> readElementIdFromJsonString(String input) {
    // fromCallable defers the parse and turns the checked IOException into an error signal
    return Mono.fromCallable(() -> MAPPER.readTree(input))
            .map(it -> it.get("elementId").asText())
            .subscribeOn(Schedulers.boundedElastic());
}
Something along those lines.
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.springframework.core.ResolvableType;
import org.springframework.core.env.Environment;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBuffer;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;
import org.springframework.http.codec.json.AbstractJackson2Decoder;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

@FunctionalInterface
public interface MessageParser<T> {
    Mono<T> parse(String message);
}
public class JsonNodeParser extends AbstractJackson2Decoder implements MessageParser<JsonNode> {

    private static final MimeType MIME_TYPE = MimeTypeUtils.APPLICATION_JSON;
    // allocateDefaultObjectMapper() is the author's factory for a preconfigured ObjectMapper (not shown)
    private static final ObjectMapper OBJECT_MAPPER = allocateDefaultObjectMapper();

    private final DefaultDataBufferFactory factory;
    private final ResolvableType resolvableType;

    public JsonNodeParser(final Environment env) {
        super(OBJECT_MAPPER, MIME_TYPE);
        this.factory = new DefaultDataBufferFactory();
        this.resolvableType = ResolvableType.forClass(JsonNode.class);
        this.setMaxInMemorySize(100000); // ~100 KB
        canDecodeJsonNode();
    }

    @Override
    public Mono<JsonNode> parse(final String message) {
        final byte[] bytes = message.getBytes(StandardCharsets.UTF_8);
        return decode(bytes);
    }

    private Mono<JsonNode> decode(final byte[] bytes) {
        final DefaultDataBuffer defaultDataBuffer = this.factory.wrap(bytes);
        return this.decodeToMono(Mono.just(defaultDataBuffer), this.resolvableType, MIME_TYPE, Map.of())
                .ofType(JsonNode.class)
                .subscribeOn(Schedulers.boundedElastic())
                .doFinally((t) -> DataBufferUtils.release(defaultDataBuffer));
    }

    private void canDecodeJsonNode() {
        if (!canDecode(this.resolvableType, MIME_TYPE)) {
            throw new IllegalStateException(String.format("JsonNodeParser doesn't support the given target " +
                    "element type [%s] and the MIME type [%s]", this.resolvableType, MIME_TYPE));
        }
    }
}
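A usage sketch for the parser above (the Environment parameter is never read by the constructor, so null works for a quick demo):

MessageParser<JsonNode> parser = new JsonNodeParser(null);
parser.parse("{\"elementId\":\"abc-123\"}")
      .map(node -> node.get("elementId").asText())
      .subscribe(System.out::println); // prints abc-123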

Hibernate export to csv

I want to export a query result to an Excel or CSV file.
I am using Hibernate with Struts.
Is there any query like 'into outfile' which can directly export the result to a specified location?
In a MySQL database, the 'into outfile' query works fine, but in Hibernate it is not working.
I tried using native SQL, but it gives the error 'couldn't execute bulk manipulation query' and I cannot solve that.
I am using a MySQL database.
If you are writing a web app and using Spring, you can do it by writing the data to an output stream.
Write a simple class to construct your response:
public class CsvResponse {

    private final String filename;
    private final List<YourPojo> records;

    public CsvResponse(List<YourPojo> records, String filename) {
        this.records = records;
        this.filename = filename;
    }

    public String getFilename() {
        return filename;
    }

    public List<YourPojo> getRecords() {
        return records;
    }
}
Now write a message converter to write them to an output stream
public class CsvMessageConverter extends AbstractHttpMessageConverter<CsvResponse> {

    public static final MediaType MEDIA_TYPE = new MediaType("text", "csv", Charset.forName("UTF-8"));

    public CsvMessageConverter() {
        super(MEDIA_TYPE);
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return CsvResponse.class.equals(clazz);
    }

    @Override
    protected CsvResponse readInternal(Class<? extends CsvResponse> clazz, HttpInputMessage input) {
        throw new UnsupportedOperationException(); // this converter only writes
    }

    @Override
    protected void writeInternal(CsvResponse response, HttpOutputMessage output) throws IOException {
        output.getHeaders().setContentType(MEDIA_TYPE);
        output.getHeaders().set("Content-Disposition", "attachment; filename=\"" + response.getFilename() + "\"");
        OutputStream out = output.getBody();
        // CsvWriter comes from a CSV library such as javacsv; '\u0009' is a tab delimiter
        CsvWriter writer = new CsvWriter(new OutputStreamWriter(out), '\u0009');
        List<YourPojo> allRecords = response.getRecords();
        for (int i = 0; i < allRecords.size(); i++) { // start at 0 so the first record isn't skipped
            YourPojo aReq = allRecords.get(i);
            writer.write(aReq.toString());
        }
        writer.close();
    }
}
Add this Message converter to your app context config file
<mvc:annotation-driven>
    <mvc:message-converters register-defaults="true">
        <bean class="com.yourpackage.CsvMessageConverter"/>
    </mvc:message-converters>
</mvc:annotation-driven>
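If your project uses Java config instead of XML, a roughly equivalent sketch (assuming Spring 5's WebMvcConfigurer; older versions would extend WebMvcConfigurerAdapter instead) would be:

import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(new CsvMessageConverter()); // keeps the default converters and adds ours
    }
}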
Finally, the controller will look like:
@RequestMapping(value = "/csvData", method = RequestMethod.GET, produces = "text/csv")
@ResponseBody
public CsvResponse getFullData(HttpSession session) throws IOException {
    // get data
    List<YourPojo> allRecords = yourService.getData();
    return new CsvResponse(allRecords, "yourData.csv");
}
I've found a similar way using JAX-RS here.
The bottom line is that you'll have to use a REST mechanism to get the data into the output stream if you want to do it the proper way, but if your only goal is to get the data into a file, you can simply collect your data in a list and then write it to a file, as sketched below.
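A minimal sketch of that simpler route (the file name, delimiter, and the YourPojo getters are assumptions):

private void writeCsvFile(List<YourPojo> allRecords) throws IOException {
    try (PrintWriter writer = new PrintWriter("yourData.csv", "UTF-8")) {
        writer.println("column1,column2"); // header row; column names are made up
        for (YourPojo record : allRecords) {
            // a real CSV writer should quote/escape fields that contain commas
            writer.println(record.getColumn1() + "," + record.getColumn2());
        }
    }
}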

Processing JSON using Java MapReduce

I am new to Hadoop MapReduce.
I have an input text file where the data is stored as follows. Here are just a few tuples (data.txt):
{"author":"Sharīf Qāsim","book":"al- Rabīʻ al-manshūd"}
{"author":"Nāṣir Nimrī","book":"Adīb ʻAbbāsī"}
{"author":"Muẓaffar ʻAbd al-Majīd Kammūnah","book":"Asmāʼ Allāh al-ḥusná al-wāridah fī muḥkam kitābih"}
{"author":"Ḥasan Muṣṭafá Aḥmad","book":"al- Jabhah al-sharqīyah wa-maʻārikuhā fī ḥarb Ramaḍān"}
{"author":"Rafīqah Salīm Ḥammūd","book":"Taʻlīm fī al-Baḥrayn"}
This is the Java file that I am supposed to write my code in (CombineBooks.java):
package org.hwone;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.GenericOptionsParser;

//TODO import necessary components

/*
 * Modify this file to combine books from the same author into a
 * single JSON object.
 * i.e. {"author": "Tobias Wells", "books": [{"book":"A die in the country"},{"book": "Dinky died"}]}
 * Be aware that this may work on any number of nodes!
 *
 */

public class CombineBooks {

    //TODO define variables and implement necessary components

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: CombineBooks <in> <out>");
            System.exit(2);
        }

        //TODO implement CombineBooks

        Job job = new Job(conf, "CombineBooks");

        //TODO implement CombineBooks

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
My task is to create a Hadoop program in "CombineBooks.java", returned in the "question-2" directory. The program should do the following: given the input author-book tuples, the map-reduce program should produce a JSON object which contains all the books from the same author in a JSON array, i.e.
{"author": "Tobias Wells", "books":[{"book":"A die in the country"},{"book": "Dinky died"}]}
Any idea how it can be done?
First, the JSON classes you are trying to work with are not available to you. To solve this:
Go here and download as zip: https://github.com/douglascrockford/JSON-java
Extract to your sources folder in subdirectory org/json/*
Next, the first line of your code makes a package "org.json", which is incorrect; you should create a separate package, for instance "my.books".
Third, using a combiner here is useless.
Here's the code I ended up with; it works and solves your problem:
package my.books;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import org.json.*;

public class CombineBooks {

    // Mapper: parse one JSON tuple per line and emit (author, book)
    public static class Map extends Mapper<LongWritable, Text, Text, Text> {

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String author;
            String book;
            String line = value.toString();
            String[] tuple = line.split("\\n");
            try {
                for (int i = 0; i < tuple.length; i++) {
                    JSONObject obj = new JSONObject(tuple[i]);
                    author = obj.getString("author");
                    book = obj.getString("book");
                    context.write(new Text(author), new Text(book));
                }
            } catch (JSONException e) {
                e.printStackTrace();
            }
        }
    }

    // Reducer: collect all books of one author into a JSON array
    public static class Reduce extends Reducer<Text, Text, NullWritable, Text> {

        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            try {
                JSONObject obj = new JSONObject();
                JSONArray ja = new JSONArray();
                for (Text val : values) {
                    JSONObject jo = new JSONObject().put("book", val.toString());
                    ja.put(jo);
                }
                obj.put("books", ja);
                obj.put("author", key.toString());
                context.write(NullWritable.get(), new Text(obj.toString()));
            } catch (JSONException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        if (args.length != 2) {
            System.err.println("Usage: CombineBooks <in> <out>");
            System.exit(2);
        }

        Job job = new Job(conf, "CombineBooks");
        job.setJarByClass(CombineBooks.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Here's the folder structure of my project:
src
src/my
src/my/books
src/my/books/CombineBooks.java
src/org
src/org/json
src/org/json/zip
src/org/json/zip/BitReader.java
...
src/org/json/zip/None.java
src/org/json/JSONStringer.java
src/org/json/JSONML.java
...
src/org/json/JSONException.java
Here's the input
[localhost:CombineBooks]$ hdfs dfs -cat /example.txt
{"author":"author1", "book":"book1"}
{"author":"author1", "book":"book2"}
{"author":"author1", "book":"book3"}
{"author":"author2", "book":"book4"}
{"author":"author2", "book":"book5"}
{"author":"author3", "book":"book6"}
The command to run:
hadoop jar ./bookparse.jar my.books.CombineBooks /example.txt /test_output
Here's the output:
[pivhdsne:CombineBooks]$ hdfs dfs -cat /test_output/part-r-00000
{"books":[{"book":"book3"},{"book":"book2"},{"book":"book1"}],"author":"author1"}
{"books":[{"book":"book5"},{"book":"book4"}],"author":"author2"}
{"books":[{"book":"book6"}],"author":"author3"}
You can use one of the following three options to put the org.json.* classes onto your cluster:
Pack the org.json.* classes into your jar file (this can easily be done using a GUI IDE). This is the option I used in my answer.
Put the jar file containing the org.json.* classes on each of the cluster nodes into one of the CLASSPATH directories (see yarn.application.classpath).
Put the jar file containing org.json.* into HDFS (hdfs dfs -put <org.json jar> <hdfs path>) and use the job.addFileToClassPath call to make this jar file available to all of the tasks executing your job on the cluster. In my answer, you should add job.addFileToClassPath(new Path("<jar_file_on_hdfs_location>")); to the main method.
For splittable multi-line JSON, refer to:
https://github.com/alexholmes/json-mapreduce

How to serialize such a custom type to JSON with google-gson?

First, I have a very simple java bean which can be easily serialized to json:
class Node {
    private String text;
    // getter and setter
}

Node node = new Node();
node.setText("Hello");
String json = new Gson().toJson(node);
// json is { text: "Hello" }
Then, in order to make such beans carry some dynamic values, I create a "WithData" base class:
class WithData {
    private Map<String, Object> map = new HashMap<String, Object>();
    public void setData(String key, Object value) { map.put(key, value); }
    public Object getData(String key) { return map.get(key); }
}

class Node extends WithData {
    private String text;
    // getter and setter
}
Now I can set more data to a node:
Node node = new Node();
node.setText("Hello");
node.setData("to", "The world");
But Gson will ignore the "to", the result is still { text: "Hello" }. I expect it to be: { text: "Hello", to: "The world" }
Is there any way to write a serializer for the type WithData, so that all classes extending it will write not only their own properties to JSON, but also the data in the map?
I tried to implement a custom serializer but failed, because I don't know how to let Gson serialize the properties first and then the data in the map.
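For what it's worth, a sketch of one way to do exactly that: delegate the field serialization to a plain Gson instance (so the custom serializer is not re-entered), then hoist the map entries to the top level. Registered with registerTypeHierarchyAdapter, it applies to every WithData subclass; treat it as an illustration, not a drop-in answer:

public class WithDataSerializer implements JsonSerializer<WithData> {

    // A plain Gson with no custom adapters, so toJsonTree serializes the fields normally
    private static final Gson DEFAULT = new Gson();

    @Override
    public JsonElement serialize(WithData src, Type typeOfSrc, JsonSerializationContext context) {
        JsonObject obj = DEFAULT.toJsonTree(src).getAsJsonObject();
        JsonElement map = obj.remove("map"); // drop the wrapper field...
        if (map != null && map.isJsonObject()) {
            for (java.util.Map.Entry<String, JsonElement> e : map.getAsJsonObject().entrySet()) {
                obj.add(e.getKey(), e.getValue()); // ...and merge its entries at the top level
            }
        }
        return obj;
    }
}

Gson gson = new GsonBuilder()
        .registerTypeHierarchyAdapter(WithData.class, new WithDataSerializer())
        .create();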
What I do now is create a custom serializer:
public static class NodeSerializer implements JsonSerializer<Node> {
    public JsonElement serialize(Node src, Type typeOfSrc, JsonSerializationContext context) {
        JsonObject obj = new JsonObject();
        obj.addProperty("id", src.id);
        obj.addProperty("text", src.text);
        obj.addProperty("leaf", src.leaf);
        obj.addProperty("level", src.level);
        obj.addProperty("parentId", src.parentId);
        obj.addProperty("order", src.order);
        Set<String> keys = src.getDataKeys();
        if (keys != null) {
            for (String key : keys) {
                obj.add(key, context.serialize(src.getData(key)));
            }
        }
        return obj;
    }
}
Then use GsonBuilder to convert it:
Gson gson = new GsonBuilder()
        .registerTypeAdapter(Node.class, new NodeSerializer())
        .create();
Tree tree = new Tree();
tree.addNode(node1);
tree.addNode(node2);
gson.toJson(tree);
Then the nodes in the tree will be converted as I expected. The only annoying thing is that I need to create a specially configured Gson instance each time.
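If that repeated setup is the only pain point, a minimal sketch (the holder class name is made up) is to configure the Gson once and share it; Gson instances are thread-safe:

public final class GsonHolder {

    public static final Gson GSON = new GsonBuilder()
            .registerTypeAdapter(Node.class, new NodeSerializer())
            .create();

    private GsonHolder() {
    }
}

// usage: GsonHolder.GSON.toJson(tree);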
Actually, you should expect a Node (extending WithData) to serialize as
{
  "text": "Hello",
  "map": {
    "to": "the world"
  }
}
(that's with "pretty print" turned on)
I was able to get that serialization when I tried your example. Here is my exact code
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

import java.net.MalformedURLException;
import java.util.HashMap;
import java.util.Map;

public class Class1 {

    public static void main(String[] args) throws MalformedURLException {
        GsonBuilder gb = new GsonBuilder();
        Gson g = gb.setPrettyPrinting().create();

        Node n = new Node();
        n.setText("Hello");
        n.setData("to", "the world");

        System.out.println(g.toJson(n));
    }

    private static class WithData {
        private Map<String, Object> map = new HashMap<String, Object>();
        public void setData(String key, Object value) { map.put(key, value); }
        public Object getData(String key) { return map.get(key); }
    }

    private static class Node extends WithData {
        private String text;
        public Node() { }
        public String getText() { return text; }
        public void setText(String text) { this.text = text; }
    }
}
I was using the JDK (javac) to compile - that is important because other compilers (those included with some IDEs) may remove the information on which Gson relies as part of their optimization or obfuscation process.
Here are the compilation and execution commands I used:
"C:\Program Files\Java\jdk1.6.0_24\bin\javac.exe" -classpath gson-2.0.jar Class1.java
"C:\Program Files\Java\jdk1.6.0_24\bin\java.exe" -classpath .;gson-2.0.jar Class1
For the purposes of this test, I put the Gson jar file in the same folder as the test class file.
Note that I'm using Gson 2.0; 1.x may behave differently.
Your JDK may be installed in a different location than mine, so if you use those commands, be sure to adjust the path to your JDK as appropriate.