Persist large JSON string into database entries?

I have a URL with a large number of entries that I want to persist into a MySQL database, so that I can serve them from my own REST API. I have set up the whole structure as if I'm ready to use my own API, with entities, DTOs, controllers, resources, etc.
What I cannot figure out is: how do I persist the JSON from the external API call into my database? Do I need to convert the JSON to entities before I persist it, or can I parse it directly into my database? And how would I go about persisting such a large JSON string with so many entries?
I've made an endpoint that calls the API, so that I can update the database on demand, but I don't really know what to do next:
@PostMapping(value = "/popDB")
private ItemEntity getItemObject() throws IOException {
    URL url = new URL("api url string");
    InputStream inputStream = url.openConnection().getInputStream();
    ObjectMapper mapper = new ObjectMapper();
    Map<String, Object> jsonMap = mapper.readValue(inputStream, Map.class);
    // this cast fails at runtime: the parsed Map is not an ItemEntity
    articleService.save((ItemEntity) jsonMap);
    return (ItemEntity) jsonMap;
}
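One way to do it, sketched under some assumptions: the external API returns a JSON array whose objects map field-for-field onto ItemEntity, and articleService can do a bulk save (the saveAll name is hypothetical, e.g. delegating to a Spring Data JpaRepository). Jackson then binds the payload straight to entities, with no manual parsing:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.List;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

@PostMapping(value = "/popDB")
public List<ItemEntity> populateDatabase() throws IOException {
    URL url = new URL("api url string");
    ObjectMapper mapper = new ObjectMapper();
    try (InputStream inputStream = url.openConnection().getInputStream()) {
        // bind the JSON array directly to a list of entities
        List<ItemEntity> items = mapper.readValue(inputStream,
                new TypeReference<List<ItemEntity>>() {});
        articleService.saveAll(items); // hypothetical bulk save
        return items;
    }
}

If the payload is too large to hold in memory, a streaming parse (Jackson's JsonParser, reading one object at a time) combined with saving in fixed-size chunks keeps memory usage flat.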

Related

How to create a custom POJO for Apache Flink

I'm using Flink to process some JSON-format data coming from some Data Source.
For now, my process is quite simple: extract each element from the JSON-format data and print it to the log file.
Here is my piece of code:
// create a proper deserializer to deserialize the JSON-format data into ObjectNode
PravegaDeserializationSchema<ObjectNode> adapter = new PravegaDeserializationSchema<>(ObjectNode.class, new JavaSerializer<>());
// create a connector to receive data from Pravega
FlinkPravegaReader<ObjectNode> source = FlinkPravegaReader.<ObjectNode>builder()
        .withPravegaConfig(pravegaConfig)
        .forStream(stream)
        .withDeserializationSchema(adapter)
        .build();
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<ObjectNode> dataStream = env.addSource(source).name("Pravega Stream");
dataStream.???.print();
Say the data coming from Pravega looks like this: {"name":"titi", "age":18}
As I said, for now I simply need to extract name and age and print them.
So how could I do this?
As I understand it, I need to add some custom code at ???. I might need to create a custom POJO class which contains the ObjectNode, but I don't know how. I've read the official Flink docs and also tried to google how to create a custom POJO for Flink, but I still can't figure it out.
Could you please show me an example?
Why don't you simply use something more meaningful instead of JavaSerializer? Perhaps something from here.
You could then create a POJO with the fields you want to use and simply deserialize the JSON data into your POJO instead of an ObjectNode.
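For the sample record {"name":"titi", "age":18}, the POJO could be as simple as this sketch (Flink's POJO serializer needs a public class, a public no-arg constructor, and public fields or getters/setters; the class name matches the snippet below):

public class MyPojo {
    public String name;
    public int age;

    // public no-arg constructor required by Flink's POJO serializer (and by Jackson)
    public MyPojo() {
    }
}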
Also, if there is some specific reason why you need ObjectNode after deserialization, then you can simply do something like this:
// I assume you have created the class named MyPojo
dataStream.map(new MapFunction<ObjectNode, MyPojo>() {
    ObjectMapper mapper = new ObjectMapper();

    @Override
    public MyPojo map(final ObjectNode value) throws Exception {
        // treeToValue binds the already-parsed tree directly; note that
        // value.asText() would return an empty string for an ObjectNode
        return mapper.treeToValue(value, MyPojo.class);
    }
})

Unmarshalling with Jackson "The Json input stream must start with an array of Json objects"

I'm getting an error when unmarshalling files that only contain a single JSON object: "IllegalStateException: The Json input stream must start with an array of Json objects"
I can't find any workaround and I don't understand why it has to be so.
@Bean
public ItemReader<JsonHar> reader(@Value("file:${json.resources.path}/*.json") Resource[] resources) {
    log.info("Processing JSON resources: {}", Arrays.toString(resources));
    JsonItemReader<JsonHar> delegate = new JsonItemReaderBuilder<JsonHar>()
            .jsonObjectReader(new JacksonJsonObjectReader<>(JsonHar.class))
            .resource(resources[0]) //FIXME had to force this, but fails anyway because the file is "{...}" and not "[...]"
            .name("jsonItemReader")
            .build();
    MultiResourceItemReader<JsonHar> reader = new MultiResourceItemReader<>();
    reader.setDelegate(delegate);
    reader.setResources(resources);
    return reader;
}
I need a way to unmarshal single-object files; what's the point in forcing arrays, which I won't have in my use case?
I don't understand why it has to be so.
The JsonItemReader is designed to read an array of objects because batch processing is usually about handling data sources with a lot of items, not a single item.
I can't find any workaround
JsonObjectReader is what you are looking for: you can implement it to read a single JSON object and use it with the JsonItemReader (either at construction time or through the setter). This is not a workaround but a strategy interface designed for specific use cases like yours.
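For example, a minimal sketch of such a JsonObjectReader that reads exactly one object per resource (the class name and details are mine, not from Spring Batch):

import java.io.InputStream;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.batch.item.json.JsonObjectReader;
import org.springframework.core.io.Resource;

public class SingleObjectJsonReader<T> implements JsonObjectReader<T> {

    private final Class<T> itemType;
    private final ObjectMapper mapper = new ObjectMapper();
    private Resource resource;
    private boolean consumed;

    public SingleObjectJsonReader(Class<T> itemType) {
        this.itemType = itemType;
    }

    @Override
    public void open(Resource resource) throws Exception {
        this.resource = resource;
        this.consumed = false;
    }

    @Override
    public T read() throws Exception {
        if (consumed) {
            return null; // null signals end of input to the JsonItemReader
        }
        consumed = true;
        try (InputStream in = resource.getInputStream()) {
            return mapper.readValue(in, itemType);
        }
    }
}

You would then plug it into the builder with .jsonObjectReader(new SingleObjectJsonReader<>(JsonHar.class)) instead of the JacksonJsonObjectReader.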
Definitely not ideal, @thomas-escolan. As @mahmoud-ben-hassine pointed out, the ideal would be to code a custom reader.
In case some new SO users stumble on this question, I leave here a code example of how to do it.
Though this may not be ideal, this is how I handled the situation:
@Bean
public ItemReader<JsonHar> reader(@Value("file:${json.resources.path}/*.json") Resource[] resources) {
    log.info("Processing JSON resources: {}", Arrays.toString(resources));
    JsonItemReader<JsonHar> delegate = new JsonItemReaderBuilder<JsonHar>()
            .jsonObjectReader(new JacksonJsonObjectReader<>(JsonHar.class))
            .resource(resources[0]) //DEBUG had to force this because of NPE...
            .name("jsonItemReader")
            .build();
    MultiResourceItemReader<JsonHar> reader = new MultiResourceItemReader<>();
    reader.setDelegate(delegate);
    reader.setResources(Arrays.stream(resources)
            .map(WrappedResource::new) // forcing the bride to look good enough
            .toArray(Resource[]::new));
    return reader;
}

@RequiredArgsConstructor
static class WrappedResource implements Resource {

    @Delegate(excludes = InputStreamSource.class)
    private final Resource resource;

    @Override
    public InputStream getInputStream() throws IOException {
        log.info("Wrapping resource: {}", resource.getFilename());
        InputStream in = resource.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, UTF_8));
        String wrap = reader.lines().collect(Collectors.joining())
                .replaceAll("[^\\x00-\\xFF]", ""); // strips characters outside Latin-1 (0x00-0xFF)
        // wrap the single JSON object in brackets so it parses as an array
        return new ByteArrayInputStream(("[" + wrap + "]").getBytes(UTF_8));
    }
}

Sending JSON in batches

I have some doubts about how to perform a task. I use Jackson to create a JSON document; after encrypting it, I need to send it to a service that will consume it. The problem is that the (physical) file size is 3,571 KB and I need to send it in batches of at most 1,000 KB each. As I am a newcomer to Spring Boot and the web in general, I read that I have to do something called pagination; is that it?
I have a DTO (students) and a manager class that accesses the database and returns a list of students.
Then I create the JSON, encode it in Base64, and finally configure the header and make the request:
studentList = StudantManager.getAllStudants(con);
int sizeRecords = studentList.size();
try {
    students = useful.convertToJson(studentList);
    studentsWithSecurity = useful.securityJson(students);
} catch (JsonProcessingException e) {
    log.error(e.toString());
}
RestTemplate restTemplate = new RestTemplate();
String url = "myRestService";
HttpHeaders headers = getHeaders(sizeRecords, students);
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<String> entity = new HttpEntity<>(studentsWithSecurity, headers);
String answer = restTemplate.postForObject(url, entity, String.class);
Building on my current code, how can I create a solution for the upload problem I described above?
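One approach, sketched under some assumptions: the page size below is a guess to keep each serialized chunk under the 1,000 KB limit, StudentDto stands in for the real DTO type, and the helpers from the code above are reused as if they accept any sublist.

int pageSize = 500; // tune so each serialized page stays below ~1,000 KB
for (int from = 0; from < studentList.size(); from += pageSize) {
    int to = Math.min(from + pageSize, studentList.size());
    List<StudentDto> page = studentList.subList(from, to);
    try {
        String pageJson = useful.convertToJson(page);
        String securedPage = useful.securityJson(pageJson);
        HttpHeaders headers = getHeaders(page.size(), pageJson);
        headers.setContentType(MediaType.APPLICATION_JSON);
        // send each page as its own request instead of one oversized payload
        restTemplate.postForObject(url, new HttpEntity<>(securedPage, headers), String.class);
    } catch (JsonProcessingException e) {
        log.error(e.toString());
    }
}

Note that this only works if the receiving service can accept the data as several independent requests; if it must reassemble one logical document, you would also send a page index and total count (for instance in the headers) so the server can stitch the pages back together.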

How to directly convert a MongoDB Document to a Jackson JsonNode in Java

I would like to store a MongoDB Document (org.bson.Document) as a Jackson JsonNode type. There is an outdated answer to this problem here; inspired by it, I was able to successfully parse the Document with:
ObjectMapper mapper = new ObjectMapper();
...
JsonNode jsonData = mapper.readTree(someBsonDocument.toJson());
In my understanding this will:
1. convert the Document to a JSON string, and
2. parse that string to create a JsonNode object.
I noticed there is some support for MongoDB/BSON in the Jackson project (jackson-datatype-mongo and BSON for Jackson), but I cannot figure out how to use them to do the conversion more efficiently.
I was able to figure out a solution using bson4jackson:

public static InputStream documentToInputStream(final Document document) {
    // encode the Document to raw BSON bytes in memory
    BasicOutputBuffer outputBuffer = new BasicOutputBuffer();
    BsonBinaryWriter writer = new BsonBinaryWriter(outputBuffer);
    new DocumentCodec().encode(writer, document, EncoderContext.builder().isEncodingCollectibleDocument(true).build());
    return new ByteArrayInputStream(outputBuffer.toByteArray());
}

public static JsonNode documentToJsonNode(final Document document) throws IOException {
    // bson4jackson's BsonFactory lets Jackson read the binary BSON directly,
    // avoiding the BSON -> JSON string -> JsonNode round trip
    ObjectMapper mapper = new ObjectMapper(new BsonFactory());
    InputStream is = documentToInputStream(document);
    return mapper.readTree(is);
}
I am not sure if this is the most efficient way, but I assume it is still a better solution than converting the BSON to a String and parsing that string. There is an open ticket in the MongoDB JIRA for adding conversions between Document, DBObject, BsonDocument, and JsonNode, which would simplify the whole process a lot.
I appreciate this isn't what the OP asked for, but it might be helpful to some. I've managed to do this in reverse using MongoJack. The key is to use the JacksonEncoder, which can turn any JSON-like object into a BSON object, and then a BsonDocumentWriter to write it to a BsonDocument instance.
@Test
public void writeBsonDocument() throws IOException {
    JsonNode jsonNode = new ObjectMapper().readTree("{\"wibble\": \"wobble\"}");
    BsonDocument document = new BsonDocument();
    BsonDocumentWriter writer = new BsonDocumentWriter(document);
    JacksonEncoder transcoder =
            new JacksonEncoder(JsonNode.class, null, new ObjectMapper(), UuidRepresentation.UNSPECIFIED);
    var context = EncoderContext.builder().isEncodingCollectibleDocument(true).build();
    transcoder.encode(writer, jsonNode, context);
    Assertions.assertThat(document.toJson()).isEqualTo("{\"wibble\": \"wobble\"}");
}

How to display JSON data in JavaFX

My JavaFX application handles large amounts of JSON data. What is the simplest way to visualize the JSON data in a table, which also must be editable?
The obvious method is to convert the JSON to Java objects, but for a number of reasons I would like to avoid that.
UPDATE: following the comment below, I have tried this (feeding the ListView directly):
String json = "[{\"fields\":{\"VENDOR\":[\"xxx\"],\"TYPE\":[\"yyyyy\"]}, \"path\": \"C:\"}]";

@FXML
private ListView idListView;

JsonReader reader = Json.createReader(new StringReader(json));
JsonArray myItems = reader.readArray();
reader.close();

ObservableList<JsonObject> oList =
        FXCollections.observableArrayList(myItems.toArray(new JsonObject[0]));
idListView.setItems(oList);
Not working for me. What can I do differently?
I followed this advice: "You may convert the json data to map and follow Example 12-12 Adding Map Data to the Table. – Uluk Biy Jun 5 at 9:05"
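For anyone landing here, a minimal sketch of that approach using Jackson and JavaFX's MapValueFactory (the column key follows the sample JSON above; the method name is mine):

import java.io.IOException;
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import javafx.collections.FXCollections;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.scene.control.cell.MapValueFactory;

private TableView<Map> buildJsonTable(String json) throws IOException {
    // parse the JSON array into a list of maps, one map per table row
    List<Map> rows = new ObjectMapper().readValue(json, new TypeReference<List<Map>>() {});
    TableView<Map> table = new TableView<>(FXCollections.observableArrayList(rows));
    // each column looks its value up in the row map by key
    TableColumn<Map, Object> pathCol = new TableColumn<>("path");
    pathCol.setCellValueFactory(new MapValueFactory<>("path"));
    table.getColumns().add(pathCol);
    return table;
}

To make the table editable, set the table and columns editable and attach a cell factory such as TextFieldTableCell, writing committed edits back into the row map.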