How do I store nested JSON objects directly in ABAP DDIC?

Databases supported by ABAP (Oracle, MaxDB, et al.) are all RDBMSs. Right now, I have a JSON structure that cannot be normalised, so I want to store it as is; in effect, I want a MongoDB-like object store in ABAP.
What's the best way to achieve this? Is a data cluster an option? Perhaps the only option?

I don't think you can connect to databases other than the supported ones directly from ABAP. If you have NetWeaver Java, you can call a custom Java application which accesses MongoDB. You could also check whether SAP HANA offers something similar.
In ABAP you interact with the RDBMS via the ABAP Dictionary.
It supports data types like LCHR, STRING, and RAWSTRING. Check the documentation for more details.

A data cluster is one option, but you can also simply use a string or binary type DB field (STRING/RAWSTRING) for storing the JSON data.
ABAP's CALL TRANSFORMATION statement converts ABAP data to XML/JSON and vice versa.
There is a simple example on the following blog:
https://blogs.sap.com/2013/07/04/abap-news-for-release-740-abap-and-json/
The comments on the blog page contain more information.

Related

Converting REST API JSON schema into a CQL Cassandra schema

I want to download data from a REST API into a database. The data I want to save are typed objects, like Java objects. I have chosen Cassandra because it supports array and map types, unlike standard SQL databases (MySQL, SQLite, ...), which makes it better for serialising Java objects.
First, I need to create the CQL tables from the JSON schema of the REST API. How can I generate a CQL table from the JSON schema of a REST API?
I know openapi-generator can generate a MySQL schema from a JSON schema, but it doesn't support CQL for the moment, so I need to find an alternative solution.
I haven't used off-the-shelf packages extensively to manage Cassandra schemas, but there are possibly open-source projects or tools like Hackolade that might do it for you.
https://cassandra.link/cassandra.toolkit/, managed by Anant (I have no affiliation), has an extensive list of resources you might be interested in. Cheers!
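
If nothing off the shelf fits, hand-rolling the mapping is not much code. Below is a minimal, hypothetical Java sketch (using Jackson) that walks a JSON schema's properties and emits a CREATE TABLE statement. The type mapping, the flattening of arrays and nested objects, and all names are simplifying assumptions you would need to adapt to your real schemas.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Iterator;
import java.util.Map;

public class JsonSchemaToCql {

    // Hypothetical mapping from JSON Schema primitive types to CQL types.
    private static String cqlType(JsonNode property) {
        switch (property.path("type").asText("string")) {
            case "integer": return "bigint";
            case "number":  return "double";
            case "boolean": return "boolean";
            case "array":   return "list<text>"; // simplification: element type not inspected
            case "object":  return "text";       // simplification: nested objects kept as JSON text
            default:        return "text";
        }
    }

    public static String createTable(String table, JsonNode schema, String primaryKey) {
        StringBuilder cql = new StringBuilder("CREATE TABLE " + table + " (\n");
        Iterator<Map.Entry<String, JsonNode>> fields = schema.path("properties").fields();
        while (fields.hasNext()) {
            Map.Entry<String, JsonNode> field = fields.next();
            cql.append("  ").append(field.getKey()).append(' ')
               .append(cqlType(field.getValue())).append(",\n");
        }
        return cql.append("  PRIMARY KEY (").append(primaryKey).append(")\n);").toString();
    }

    public static void main(String[] args) throws Exception {
        JsonNode schema = new ObjectMapper().readTree(
                "{\"properties\": {\"id\": {\"type\": \"integer\"},"
                + " \"name\": {\"type\": \"string\"},"
                + " \"tags\": {\"type\": \"array\"}}}");
        System.out.println(createTable("users", schema, "id"));
    }
}
```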

Convert MongoDB collection to MySQL database

I created my project in Spring 4 MVC + Hibernate with MongoDB. Now I have to convert it to Hibernate with MySQL. My problem is that I have many collections in MongoDB, in BSON and JSON format. How can I convert these into MySQL table format? Is that possible?
MongoDB is a non-relational database, while MySQL is relational. The key difference is that the non-relational database contains documents (JSON objects) which can have a hierarchical structure, whereas the relational database expects the objects to be normalised and broken down into tables. It is therefore not possible to simply convert the BSON data from MongoDB into something which MySQL will understand. You will need to write some code that reads the data from MongoDB and then writes it into MySQL.
The documents in your MongoDB collections represent serialised forms of some classes (POJOs, domain objects, etc.) in your project. Presumably, you read this data from MongoDB, deserialise it into its class form and use it in your project, e.g. display it to end users, use it in calculations, generate reports from it, etc.
Now you'd prefer to host that data in MySQL, so you'd like to know how to migrate the data from MongoDB to MySQL, but since the persistent formats are radically different you are wondering how to do that.
Here are two options:
Use your application code to read the data from MongoDB, deserialise it into your classes, and then write that data into MySQL using JDBC or an ORM mapping layer (see the sketch below).
Use mongoexport to export the data from MongoDB (in JSON format) and then write some kind of adapter which is capable of mapping this data into the desired format for your MySQL data model.
The non-functionals (especially for the read and write aspects) will differ between these approaches, but fundamentally both are quite similar; they both (1) read from MongoDB; (2) map the document data to the relational model; (3) write the mapped data into MySQL. The trickiest aspect of this flow is step 2, and since only you understand your data and your relational model, there is no tool which can magically do this for you. How would a third-party tool be sufficiently aware of your document model and your relational model to perform this transformation for you?
You could investigate a MongoDB JDBC driver, or use something like Apache Drill to facilitate JDBC queries against your MongoDB. Since these can return a java.sql.ResultSet, you would be dealing with a result format which is better suited to writing into MySQL, but it is likely that this still wouldn't match your target relational model, and hence you would still need some form of transformation code.
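
To make the first option concrete, here is a minimal sketch using the MongoDB Java driver and plain JDBC, assuming a hypothetical customers collection whose documents carry a name and a nested address sub-document. Every column mapping here is invented for illustration, since that mapping is exactly the part only you can decide.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MongoToMySql {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection strings and a hypothetical "customers" collection/table.
        try (MongoClient mongo = MongoClients.create("mongodb://localhost:27017");
             Connection mysql = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/shop", "user", "password")) {

            MongoCollection<Document> customers =
                    mongo.getDatabase("shop").getCollection("customers");

            // Step 2 (the hard part): map each document onto the relational model.
            PreparedStatement insert = mysql.prepareStatement(
                    "INSERT INTO customers (external_id, name, city) VALUES (?, ?, ?)");

            for (Document doc : customers.find()) {
                insert.setString(1, doc.getObjectId("_id").toHexString());
                insert.setString(2, doc.getString("name"));
                // Flattening a nested sub-document is a mapping decision only you can make.
                Document address = doc.get("address", Document.class);
                insert.setString(3, address == null ? null : address.getString("city"));
                insert.addBatch();
            }
            insert.executeBatch();
        }
    }
}
```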

Correct way to store data in SQL database when columns are unknown

The situation I have is that I am developing a form-builder-like application which needs to be customisable for each user. The form is hosted and the responses are collected in a database. What is the correct way to do this in a MySQL-like database?
For example, assume two forms: one with a text field, and another with a radio button and a text field. Also, once that model is created, is there any way to use Django forms, or will I have to go some other way?
Recently MySQL introduced JSON fields. From the documentation:
As of MySQL 5.7.8, MySQL supports a native JSON data type that enables efficient access to data in JSON (JavaScript Object Notation) documents. The JSON data type provides these advantages over storing JSON-format strings in a string column: [...]
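
The question is Django-centric, but this feature lives at the database level, so here is a minimal sketch in plain JDBC just to show the JSON column type and a JSON path query. It assumes MySQL 5.7.8+, Connector/J on the classpath, and a hypothetical form_response table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JsonColumnDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical database and table names.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/forms", "user", "password");
             Statement stmt = conn.createStatement()) {

            stmt.execute("CREATE TABLE IF NOT EXISTS form_response ("
                    + " id INT AUTO_INCREMENT PRIMARY KEY,"
                    + " data JSON NOT NULL)");

            // Each form can store whatever fields it has; MySQL validates the JSON.
            stmt.execute("INSERT INTO form_response (data) VALUES"
                    + " ('{\"name\": \"Alice\", \"subscribe\": true}')");

            // JSON_EXTRACT pulls individual fields back out, so the "unknown
            // columns" are still queryable.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT JSON_EXTRACT(data, '$.name') FROM form_response")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // prints "Alice" (with quotes)
                }
            }
        }
    }
}
```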
Even if you don't have the latest version of MySQL, it's still possible to save JSON data in a varchar field; this is quite a popular solution supported by many third-party libraries that provide JSON support for Django.
The reason a third-party library is needed is that Django doesn't have a built-in JSONField. One has recently been added for PostgreSQL, but MySQL is still lagging behind.
An alternative that does not involve MySQL is to use Redis. Django has excellent support for Redis, and as you know, Redis hashes are very similar to Python dictionaries. ORM support requires third-party libraries, as with MySQL JSON fields. However, it's simpler to think of Redis as a Python dictionary that can be persisted across sessions and queried very fast. Last but not least, the hash is just the tip of the iceberg.

JSON validation against a schema (Java EE application)

I have a use case where I need to validate JSON objects against a schema that can change in real time.
Let me explain my requirements:
1. I persist JSON objects (in MongoDB).
2. Before persisting, I MUST validate the data types of some of the fields of the JSON objects (mentioned in #1) against a schema.
3. I persist the schema in MongoDB.
4. I always validate the JSON objects against the latest schema available in the db (so I don't think it matters much that the schema can change in real time; for me it is effectively static).
5. I am using a J2EE stack (Spring Framework).
Can anyone guide me here?
Another way of doing it is to use an external library, https://github.com/fge/json-schema-validator, to do the work for you. The one I propose supports draft 4 of JSON Schema.
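A minimal sketch of how that library is used, with a hard-coded draft-4 schema standing in for the one you would fetch fresh from MongoDB before each validation:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchema;
import com.github.fge.jsonschema.main.JsonSchemaFactory;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // The schema would normally be loaded fresh from MongoDB; hard-coded here.
        JsonNode schemaNode = JsonLoader.fromString(
                "{\"type\": \"object\","
                + " \"properties\": {\"age\": {\"type\": \"integer\"}},"
                + " \"required\": [\"age\"]}");
        JsonNode data = JsonLoader.fromString("{\"age\": \"not a number\"}");

        JsonSchema schema = JsonSchemaFactory.byDefault().getJsonSchema(schemaNode);
        ProcessingReport report = schema.validate(data);

        if (!report.isSuccess()) {
            // Reject the object before persisting it.
            System.out.println(report);
        }
    }
}
```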
The IBM DataPower appliance has JSON Schema validation support. This allows you to offload validation to an appliance designed for it, along with routing of data within the enterprise.

Create some tool for converting data from one database to another

This is kind of an implementation question, maybe. I wonder, if I were to make a tool to convert some relational database to some other kind of database, what would the approach be?
If, for example, I want to convert the data and structure from a MySQL database to MSSQL, would I need to use regular expressions to parse the SQL file? Or could I convert it to XML or JSON first and, from that structure, parse it into my target database?
Using existing tools for converting MySQL to MSSQL or anything similar is out of scope, since I want to know how it is actually done.
Well, it's kind of a broad question, but generally speaking, having your own abstract representation of the structure and data would be a good thing, because you could extend your system "easily" by writing importers and exporters, and actually decouple your code a little by abstracting the relational DB concepts into your own format.
The importers would "reverse engineer" a given database by converting it to your own representation (as you say, XML/JSON, or even your own intermediate language, which would probably be better). Then the exporters would just convert from your format to the requested SQL dialect. No regular expressions, no other hardcoded handling.
This will allow you to extend your system to support a larger number of sources and targets, and also to handle errors such as an SQL feature from a "source" not being supported in the selected "target".
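A minimal sketch of that shape (invented type names, Java records just for brevity): one Importer per source engine, one Exporter per target dialect, and your own neutral model in between.

```java
import java.util.List;

/** Your own, engine-neutral representation of a schema. */
record Column(String name, String abstractType, boolean nullable) {}
record Table(String name, List<Column> columns, List<String> primaryKey) {}
record Model(List<Table> tables) {}

/** An importer reverse-engineers one source engine into the neutral model. */
interface Importer {
    Model read(java.sql.Connection source) throws java.sql.SQLException;
}

/** An exporter renders the neutral model in one target dialect. */
interface Exporter {
    String toDdl(Model model); // e.g. MSSQL CREATE TABLE statements
}

class Converter {
    private final Importer importer;
    private final Exporter exporter;

    Converter(Importer importer, Exporter exporter) {
        this.importer = importer;
        this.exporter = exporter;
    }

    // Adding a new source or target means writing one new class,
    // never touching the others.
    String convert(java.sql.Connection source) throws java.sql.SQLException {
        return exporter.toDdl(importer.read(source));
    }
}
```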
My 2 cents, hope it helps!