I want to use a JdbcTemplate and the Neo4j JDBC driver to query my neo4j database and return a JSON string.
Is there an existing method to do this?
I've googled and I can't find one.
It otherwise looks like a matter of creating a home-cooked RowMapper, as per here.
The query:
MATCH (s:Site) - [r] - (ss:SiteState) RETURN s, ss;
It returns JSON, but for my use case I map the result to an object:
import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.jdbc.core.RowMapper;

import com.google.gson.Gson;

public class SiteRowMapper implements RowMapper<Site> {

    @Override
    public Site mapRow(ResultSet rs, int rowNum) throws SQLException {
        Gson gson = new Gson();
        // The s and ss columns come back from the Neo4j JDBC driver as JSON
        // strings, so each one can be deserialized straight into a POJO.
        Site site = gson.fromJson(rs.getString("s"), Site.class);
        SiteState siteState = gson.fromJson(rs.getString("ss"), SiteState.class);
        site.setName(siteState.getName());
        return site;
    }
}
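For reference, a minimal sketch of wiring the mapper into a JdbcTemplate query; a DataSource configured against the Neo4j JDBC driver is assumed to exist elsewhere:
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
// Each returned row (a pair of JSON-encoded nodes) is mapped to one Site
List<Site> sites = jdbcTemplate.query(
        "MATCH (s:Site) - [r] - (ss:SiteState) RETURN s, ss",
        new SiteRowMapper());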
Related
I am trying to receive and access JSON data from a Kafka topic in Flink. What works is producing data, sending it to a Kafka topic, and receiving it in Flink as a String. But I want to access the data in an object-oriented way (e.g. extract a specific attribute from every message).
Therefore I have a Kafka producer which sends data (e.g. every 1s) to a Kafka topic:
ObjectMapper test = new ObjectMapper();
ObjectNode jNode= test.createObjectNode();
jNode.put("LoPos", longPos)
.put("LaPos", latPos)
.put("Timestamp", timestamp.toString());
ProducerRecord<String, ObjectNode> rec = new ProducerRecord<String, ObjectNode>(topicName, jNode);
producer.send(rec);
so the JSON data looks like this:
{"LoPos":10.5,"LaPos":2.5,"Timestamp":"2022-10-31 12:45:19.353"}
What works is receiving the data and printing it as a string:
DataStream<String> input =
env.fromSource(
KafkaSource.<String>builder()
.setBootstrapServers("localhost:9092")
.setBounded(OffsetsInitializer.latest())
.setValueOnlyDeserializer(new SimpleStringSchema())
.setTopics(topicName)
.build(),
WatermarkStrategy.noWatermarks(),
"kafka-source");
Printing the data as a string:
DataStream<String> parsed = input.map(new MapFunction<String, String>() {
    private static final long serialVersionUID = -6867736771747690202L;

    @Override
    public String map(String value) {
        System.out.println(value);
        return "test";
    }
});
How can I receive the data in Flink and access it in an object-oriented way (e.g. extract LoPos from every message)? Which approach would you recommend? I tried it with JSONValueDeserializationSchema, but without success...
Thanks!
Update 1:
I updated to Flink 1.16 to use JsonDeserializationSchema.
Then I created a Flink POJO called Event like this:
public class Event {
public double LoPos;
public double LaPos;
public Timestamp timestamp;
public Event() {}
public Event(final double LoPos, final double LaPos, final Timestamp timestamp) {
this.LaPos=LaPos;
this.LoPos=LoPos;
this.timestamp=timestamp;
}
@Override
public String toString() {
return String.valueOf(LaPos);
}
}
To read the JSON data, I implemented the following:
KafkaSource<Event> source = KafkaSource.<Event>builder()
.setBootstrapServers("localhost:9092")
.setBounded(OffsetsInitializer.earliest())
.setValueOnlyDeserializer(new JsonDeserializationSchema<>(Event.class))
.setTopics("testTopic2")
.build();
DataStream<Event> test=env.fromSource(source, WatermarkStrategy.noWatermarks(), "test");
System.out.println(source.toString());
System.out.println(test.toString());
//test.sinkTo(new PrintSink<>());
test.print();
env.execute();
So I would expect that calling source.toString() would return the value of LaPos. But all I get is:
org.apache.flink.connector.kafka.source.KafkaSource#510f3d34
What am I doing wrong?
This topic is covered in one of the recipes in the Immerok Apache Flink Cookbook.
In the examples below, I'm assuming Event is a Flink POJO.
With Flink 1.15 or earlier, you should use a custom deserializer:
KafkaSource<Event> source =
KafkaSource.<Event>builder()
.setBootstrapServers("localhost:9092")
.setTopics(TOPIC)
.setStartingOffsets(OffsetsInitializer.earliest())
.setValueOnlyDeserializer(new EventDeserializationSchema())
.build();
The deserializer can be something like this:
import java.io.IOException;

import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.json.JsonMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class EventDeserializationSchema extends AbstractDeserializationSchema<Event> {

    private static final long serialVersionUID = 1L;

    private transient ObjectMapper objectMapper;

    /**
     * For performance reasons it's better to create one ObjectMapper in this open method rather
     * than creating a new ObjectMapper for every record.
     */
    @Override
    public void open(InitializationContext context) {
        // JavaTimeModule is needed for Java 8 date/time (Instant) support
        objectMapper = JsonMapper.builder().build().registerModule(new JavaTimeModule());
    }

    /**
     * If our deserialize method needed access to the information in the Kafka headers of a
     * ConsumerRecord, we would have implemented a KafkaRecordDeserializationSchema instead of
     * extending AbstractDeserializationSchema.
     */
    @Override
    public Event deserialize(byte[] message) throws IOException {
        return objectMapper.readValue(message, Event.class);
    }
}
We've made this easier in Flink 1.16, where we've added a proper JsonDeserializationSchema you can use:
KafkaSource<Event> source =
KafkaSource.<Event>builder()
.setBootstrapServers("localhost:9092")
.setTopics(TOPIC)
.setStartingOffsets(OffsetsInitializer.earliest())
.setValueOnlyDeserializer(new JsonDeserializationSchema<>(Event.class))
.build();
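Either way, the stream then carries Event objects. Note that calling toString() on the KafkaSource (or on the DataStream) only prints the operator object itself, which is exactly the KafkaSource#510f3d34 output from the question; to see the deserialized records, attach a sink such as print(). A minimal sketch, assuming the env and source from above:
DataStream<Event> events =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

// print() adds a sink that logs each deserialized Event via its toString()
events.print();

env.execute();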
Disclaimer: I work for Immerok.
My aim is to read a CSV file, convert it to JSON, and send the generated JSON messages one by one to an ActiveMQ queue. My code is below:
final BindyCsvDataFormat bindy=new BindyCsvDataFormat(camelproject.EquityFeeds.class);
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
CamelContext _ctx = new DefaultCamelContext();
_ctx.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
_ctx.addRoutes(new RouteBuilder() {
public void configure() throws Exception {
from("file:src/main/resources?fileName=data-sample.csv")
.unmarshal(bindy)
.marshal()
.json(JsonLibrary.Jackson).log("${body}")
.to("file:src/main/resources/?fileName=emp.json");
}
});
EquityFeeds is my POJO class in the above code.
Issues:
No output is produced. The "emp.json" file does not get generated at the given location.
Also, how do I split the generated JSON into individual JSON messages and send them to an ActiveMQ queue, like I did for XML below:
.split(body().tokenizeXML("equityFeeds", null)).streaming().to("jms:queue:xml.upstream.queue");
EquityFeeds (POJO):
@CsvRecord(separator = ",", skipFirstLine = true)
public class EquityFeeds {
    @DataField(pos = 1)
    private String externalTransactionId;
    @DataField(pos = 2)
    private String clientId;
    @DataField(pos = 3)
    private String securityId;
    @DataField(pos = 4)
    private String transactionType;
    @DataField(pos = 5)
    private Date transactionDate;
    @DataField(pos = 6)
    private float marketValue;
    @DataField(pos = 7)
    private String priorityFlag;
    // getters and setters...
}
Please kindly help and tell me where I am going wrong. I am stuck on this issue and unable to move forward; I have searched Google and tried various options, but nothing is working. Any help would be highly appreciated.
Please note: I commented out the .marshal() and .json() calls to check whether the .unmarshal() is working, but the unmarshal is not working either, as "emp.json" does not get created.
If nothing happens at all when starting the route, then it is most likely due to the relative path you passed to the file component. Probably the execution directory of your Java process is not where you think it is, and the file is not found. To simplify things, I suggest you start with an absolute path. Once everything else is working, figure out the correct relative path (your base should be the value of the user.dir system property).
Re your question about splitting the contents: This is answered in the documentation.
This works for me (Camel 3.1):
public class CsvRouteBuilder extends EndpointRouteBuilder {
#Override
public void configure() {
DataFormat bindy = new BindyCsvDataFormat(BindyModel.class);
from(file("/tmp?fileName=simpsons.csv"))
.unmarshal(bindy)
.split(body())
.log("Unmarshalled model: ${body}")
.marshal().json()
.log("Marshalled to JSON: ${body}")
// Unique file name for the JSON output
.setHeader(Exchange.FILE_NAME, () -> UUID.randomUUID().toString() + ".json")
.to(file("/tmp"));
}
}
// Use lombok to generate all the boilerplate stuff
@ToString
@Getter
@Setter
@NoArgsConstructor
// Bindy record definition
@CsvRecord(separator = ";", skipFirstLine = true, crlf = "UNIX")
public static class BindyModel {
    @DataField(pos = 1)
    private String firstName;
    @DataField(pos = 2)
    private String middleName;
    @DataField(pos = 3)
    private String lastName;
}
Given this input in /tmp/simpsons.csv
firstname;middlename;lastname
Homer;Jay;Simpson
Marge;Jacqueline;Simpson
the log output looks like this
Unmarshalled model: RestRouteBuilder.BindyModel(firstName=Homer, middleName=Jay, lastName=Simpson)
Marshalled to JSON: {"firstName":"Homer","middleName":"Jay","lastName":"Simpson"}
Unmarshalled model: RestRouteBuilder.BindyModel(firstName=Marge, middleName=Jacqueline, lastName=Simpson)
Marshalled to JSON: {"firstName":"Marge","middleName":"Jacqueline","lastName":"Simpson"}
and two JSON files are written to /tmp.
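If the final target is the ActiveMQ queue from the original question rather than files, the same split can end in a JMS endpoint instead. A sketch under the assumption that the jms component is registered on the CamelContext as in the question (the queue name is only an example):
from(file("/tmp?fileName=simpsons.csv"))
    .unmarshal(bindy)
    .split(body())
    .marshal().json()
    // each split element becomes one JSON message on the queue
    .to("jms:queue:json.upstream.queue");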
I want to export a query result to an Excel or CSV file.
I am using Hibernate with Struts.
Is there any query like 'INTO OUTFILE' which can directly export an Excel file to a specified location?
In the MySQL database, an 'INTO OUTFILE' query works fine, but through Hibernate it is not working.
I tried using native SQL, but it gives the error 'couldn't execute bulk manipulation query', and I cannot solve that.
I am using a MySQL database.
If you are writing a web app and using Spring, you can do it by writing the data to an output stream.
Write a simple class to construct your response:
public class CsvResponse {
private final String filename;
private final List<YourPojo> records;
public CsvResponse(List<YourPojo> records, String filename) {
this.records = records;
this.filename = filename;
}
public String getFilename() {
return filename;
}
public List<YourPojo> getRecords() {
return records;
}
}
Now write a message converter to write them to an output stream
public class CsvMessageConverter extends AbstractHttpMessageConverter<CsvResponse> {
    public static final MediaType MEDIA_TYPE = new MediaType("text", "csv", Charset.forName("UTF-8"));

    public CsvMessageConverter() {
        super(MEDIA_TYPE);
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return CsvResponse.class.equals(clazz);
    }

    // This converter is write-only, so reading CSV requests is unsupported.
    @Override
    protected CsvResponse readInternal(Class<? extends CsvResponse> clazz, HttpInputMessage input) {
        throw new UnsupportedOperationException();
    }

    @Override
    protected void writeInternal(CsvResponse response, HttpOutputMessage output) throws IOException {
        output.getHeaders().setContentType(MEDIA_TYPE);
        output.getHeaders().set("Content-Disposition", "attachment; filename=\"" + response.getFilename() + "\"");
        OutputStream out = output.getBody();
        // '\u0009' is the tab character used as the CSV delimiter
        CsvWriter writer = new CsvWriter(new OutputStreamWriter(out), '\u0009');
        for (YourPojo record : response.getRecords()) {
            writer.write(record.toString());
        }
        writer.close();
    }
}
Add this message converter to your application context config file:
<mvc:annotation-driven>
<mvc:message-converters register-defaults="true">
<bean class="com.yourpackage.CsvMessageConverter"/>
</mvc:message-converters>
</mvc:annotation-driven>
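If your project uses Java configuration instead of XML, the converter can be registered through a WebMvcConfigurerAdapter; a rough sketch for Spring 4 (the class name WebConfig is just an example):
@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    // Adds the CSV converter alongside Spring's default converters.
    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(new CsvMessageConverter());
    }
}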
Finally, the controller will look like this:
@RequestMapping(value = "/csvData", method = RequestMethod.GET, produces = "text/csv")
@ResponseBody
public CsvResponse getFullData(HttpSession session) throws IOException {
// get data
List<YourPojo> allRecords = yourService.getData();
return new CsvResponse(allRecords, "yourData.csv");
}
I've found a similar way using JAX-RS here.
But the bottom line is that you'll have to use a REST mechanism to get the data into the output stream if you want to do it the proper way. If your only goal is to get the data into a file, you can simply fetch your data into a list and then write it to a file.
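For that simpler file-only case, a bare-bones sketch with plain java.io and no Spring; YourPojo.toString() is assumed to produce a CSV row, as in the converter above:
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class SimpleCsvDump {

    // Writes one CSV line per record to the given path.
    public static void writeToFile(List<YourPojo> records, String path) throws IOException {
        try (PrintWriter writer = new PrintWriter(Files.newBufferedWriter(Paths.get(path)))) {
            for (YourPojo record : records) {
                writer.println(record.toString());
            }
        }
    }
}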
I have written a REST API which gets data from a database. I have a column in my table which stores a date. I am using timestamp format to store the date.
My issue is when I am fetching the data, I'm not able to display the date in the proper format. I am getting 1420655400000 instead of 2015-01-08 00:00:00.
Here is my controller:
@RequestMapping(value = "/getEstimation", method = RequestMethod.GET)
public List<Estimation1> getEstimation(ModelAndView model) throws IOException{
List<Estimation1> estdetail;
estdetail= estimation.getEstimationbyId(5);
return estdetail;
}
Implementation of getEstimationbyId(double):
@Override
public List<Estimation1> getEstimationbyId(double id) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
String sql = "SELECT * FROM estimation where est_id=" +id;
List<Estimation1> estimdetails= jdbcTemplate.query(sql, new RowMapper<Estimation1>()
{
@Override
public Estimation1 mapRow(ResultSet rs, int rowNum) throws SQLException
{
Estimation1 aContact = new Estimation1();
aContact.setDate(rs.getTimestamp("est_date"));
aContact.setEst_contactperson(rs.getString("est_contact_person"));
aContact.setEst_customer(rs.getString("est_customer"));
aContact.setEst_revision(rs.getInt("est_revision"));
aContact.setEst_prjt(rs.getString("est_project"));
aContact.setEst_status(rs.getString("est_status"));
return aContact;
}
});
return estimdetails;
}
Here is the data which I am getting from the database after execution:
[{"date":1420655400000,"est_prjt":"project1","est_revision":0,"est_customer":null,"est_contactperson":"robert","est_status":null,"est_id":0.0,"ec":null}]**
What changes should I make to print the date in the proper format?
You need to give Jackson's object mapper a hint about the format in which you want your dates to be serialized. The following should work for you:
@JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd HH:mm:ss")
private Timestamp date;
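For context, a sketch of how the annotated field sits in the entity from the question (imports included; the rest of Estimation1 stays as it is):
import java.sql.Timestamp;

import com.fasterxml.jackson.annotation.JsonFormat;

public class Estimation1 {

    // Jackson now serializes this as "2015-01-08 00:00:00" instead of epoch millis.
    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd HH:mm:ss")
    private Timestamp date;

    // other fields plus getters and setters as in the question...
}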
I am new to web services. I don't know how to write the program for the web service; please help me out.
In the program I want to connect the web service to a database, and from there I get the data in JSON format.
On the client side I am using the jQuery Mobile framework and jQuery Ajax.
Suppose the database contains:
id title
1 asd
2 asw
Here is an example which I have copied from some of my code.
WCF Interface definition
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
[ServiceContract]
public interface IGraphDataProvider
{
[OperationContract]
[WebInvoke(Method = "GET", ResponseFormat = WebMessageFormat.Json, UriTemplate = "devices")]
List<string> get_devices();
}
WCF Implementation
public class GraphDataProvider : IGraphDataProvider
{
/**
* @brief Return a (possibly empty) list of devices listed in the configuration DB
* */
public List<string> get_devices()
{
// If you want to modify your headers...
// WebOperationContext.Current.OutgoingResponse.Headers["Access-Control-Allow-Origin"] = "*";
// Now just return a list of strings, WCF will convert to JSON
return getDevices();
}
}
That takes care of the JSON response. In case you don't know how to read your SQL DB, there are a couple of ways.
You could use Entity Framework. It's easy and convenient; once you have it set up, your code will look like this:
public static List<string> getDevices()
{
var db_context= new CfgEntities();
var devices = from row in db_context.Devices
where !row.Device.StartsWith("#")
select row.Device.Trim();
return devices.Distinct().ToList();
}
Alternatively, use the SQL client from Microsoft. Your code will look like this:
using System.Data.SqlClient;
// ...
public static List<string> getDevices()
{
    var sql_connection_ = new SqlConnection();
    // dbName is assumed to be defined elsewhere in the class
    sql_connection_.ConnectionString = string.Format("Server=localhost; database={0}; Trusted_Connection=SSPI", dbName);
    try
    {
        sql_connection_.Open();
    }
    catch (SqlException)
    {
        // Handle/log the failure as appropriate; if Open() worked then you have a connection.
        return new List<string>();
    }
    string queryString = "SELECT [Device] from [Config].[dbo].[Devices]";
    // Now I'm just copying shamelessly from MSDN...
    SqlCommand command = new SqlCommand(queryString, sql_connection_);
    SqlDataReader reader = command.ExecuteReader();
    List<string> return_list = new List<string>();
    while (reader.Read())
    {
        return_list.Add((string)reader[0]);
    }
    return return_list;
}