The technologies I am using to fetch data from my MySQL database are Spark 2.4.4 and Scala. I want to display that data in my Angular 8 project. Any help on how to do this? I could not find any documentation on it.
I am not sure this is a Scala/Spark question; it sounds more like a question about the system design of your project.
One solution is to have your Angular 8 app read from MySQL directly; there are tons of tutorials online.
Another solution is to use Spark/Scala to read the data and dump it to a CSV/JSON file somewhere, then have Angular 8 read in that file. The pro is that you can apply some transformations before displaying your data; the con is the latency between transformation and display. Once the flat file is read in as JSON, it's up to you how to render that data on the user's screen.
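A minimal sketch of that dump step (PySpark shown for brevity; the Scala DataFrame API is analogous, and the connection details, table name, and output path are all placeholders):

```python
# Minimal sketch: read a MySQL table over JDBC and dump it as JSON.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-to-json").getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://localhost:3306/mydb")  # placeholder URL
      .option("dbtable", "my_table")                      # placeholder table
      .option("user", "user")
      .option("password", "password")
      .option("driver", "com.mysql.cj.jdbc.Driver")       # Connector/J 8.x class
      .load())

# Any transformation you want to apply before display goes here,
# e.g. df = df.select("id", "name").where("active = 1")

# Write a single JSON file somewhere your web server can serve it.
df.coalesce(1).write.mode("overwrite").json("/var/www/data/my_table")
```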
My application requires some kind of data backup and data exchange between users, so what I want to achieve is the ability to export a single entity rather than the entire database.
I have found some help, but only for the full database, like this post:
Backup core data locally, and restore from backup - Swift
This applies to the entire database.
I tried exporting a JSON file; this might work, except that the entity I'm trying to export contains images as binary data.
So I'm stuck.
Any help with exporting just one entity rather than the full database, or with writing JSON that includes binary data, would be appreciated.
Take a look at protobuf. Apple has an official Swift library for it:
https://github.com/apple/swift-protobuf
Protobuf is an alternative encoding to JSON that has direct support for serializing binary data. There are client libraries for any language you might need to read the data in, and command-line tools if you want to examine the files manually.
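For instance, a minimal sketch in Python (swift-protobuf mirrors this on the iOS side; the Photo message and file names are hypothetical):

```python
# Hypothetical schema, compiled with `protoc --python_out=. photo.proto`
# to generate the photo_pb2 module used below:
#
#   // photo.proto
#   syntax = "proto3";
#   message Photo {
#     string title = 1;
#     bytes image = 2;  // raw binary, no base64 detour needed
#   }
import photo_pb2

photo = photo_pb2.Photo()
photo.title = "holiday"
with open("holiday.jpg", "rb") as f:
    photo.image = f.read()  # image bytes go in directly

blob = photo.SerializeToString()  # compact binary blob for export/exchange

# Round-trip on the receiving side:
restored = photo_pb2.Photo()
restored.ParseFromString(blob)
```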
I am currently trying to search a group of ebooks to learn more about C#. The aim is to ask a question and get back a page from one or more of the ebooks to read. I went to the G Suite chat team, and they kindly directed me to the Vision commands, which were easy enough to follow to produce multiple JSON files.
https://cloud.google.com/vision/docs/pdf
I want to feed these files into AutoML Natural Language. To do so, a CSV file is required.
I do not know how to create a CSV file that would get me past this point, and I am currently stuck.
How do I create a CSV file using the gcloud command, and shouldn't the JSON files be JSONL files to be accepted?
Thanks in advance for your answers.
The output from the Vision API (service) is a JSON file written to Cloud Storage.
The input dataset to Auto ML expects the data to be in CSV format and stored in Cloud Storage.
This isn't a gcloud issue but a general data-transformation problem: transforming JSON to CSV.
Google Cloud includes services that could help you with this, but I suggest you start by writing a script that converts the data (i.e. loads and parses the JSON files, creating a CSV file in the format required by AutoML).
You may want to Google to see whether others have done something similar and use their code as a starting point.
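For example, a rough sketch of such a converter (this assumes the Vision PDF output layout, a "responses" array with a per-page fullTextAnnotation, and a simple text,label CSV; check the AutoML docs for the exact columns your dataset type expects, and note the "csharp" label is a placeholder):

```python
# Flatten Vision API JSON output into a CSV that AutoML can ingest.
import csv
import glob
import json

rows = []
for path in glob.glob("vision_output/*.json"):
    with open(path) as f:
        doc = json.load(f)
    for resp in doc.get("responses", []):
        text = resp.get("fullTextAnnotation", {}).get("text", "")
        text = " ".join(text.split())  # collapse newlines into one CSV field
        if text:
            rows.append([text, "csharp"])  # placeholder label

with open("automl_dataset.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```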
NOTE: IIUC, your solution, while an interesting use of these technologies, may be overkill. If you're looking to learn the Vision API and AutoML, great. If not, most of this content is available more directly as searchable HTML and text on the web, and indeed Stack Overflow exists to answer developer questions on a myriad of topics, including C#.
I am new to GeoMesa; I have only just typed my first geomesa command. After following the command-line tools tutorial on the GeoMesa website, I found some information on ingesting data into GeoMesa through a .csv file.
So, for my research:
I have a MySQL database storing all the information sent from an Android application,
and I want to perform some geospatial analytics on it.
Right now I am converting my MySQL table to a .csv file and then ingesting it into GeoMesa, as advised on the GeoMesa website.
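Concretely, my current export step looks roughly like this (a minimal sketch; the mysql-connector-python package and the schema names are just placeholders):

```python
# Export a MySQL table to CSV for the GeoMesa CLI to ingest.
import csv
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="password", database="mydb"
)
cur = conn.cursor()
cur.execute("SELECT id, lat, lon, recorded_at FROM locations")  # placeholder schema

with open("locations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur)                                 # data rows

cur.close()
conn.close()
# Afterwards I run the GeoMesa CLI ingest command on locations.csv
# (exact flags depend on the backing datastore).
```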
But my questions are:
Is there any better option? The data runs to gigabytes and is streaming in, so I have to regenerate the .csv file regularly.
Is there any API through which I can connect my MySQL database to GeoMesa?
Is there any way to ingest using a .sql dump file? That would be much easier than a .csv file.
Since you are dealing with streaming data, I'd point to two GeoMesa integrations, plus one further option:
First, you might want to check out NiFi for managing data flows. If that fits into your architecture, then you can use GeoMesa with NiFi.
Second, Storm is quite popular for working with streaming data. GeoMesa has a brief tutorial for Storm here.
Third, to ingest SQL dumps directly, one option would be to extend the GeoMesa converter library to support them. So far, we haven't had that as a feature request from a customer or a contribution to the project. It'd definitely be a sensible and welcome extension!
I'd also point out the GeoMesa Gitter channel, which can be useful for quicker responses.
I can't find any input plugin for relational databases in the Logstash documentation.
What is the best approach to importing data from a relational database table with Logstash? Is it to connect Elasticsearch directly to the database using JDBC?
You'll need to use the JDBC River (https://github.com/jprante/elasticsearch-river-jdbc) to load JDBC data into Elasticsearch (or write your own code to do it).
It looks like there are several JIRAs open requesting JDBC loading in Logstash, but they haven't been worked on: https://logstash.jira.com/browse/LOGSTASH-1764
There's this:
WIP: Under Development, NOT FOR PRODUCTION
This is a plugin for Logstash.
It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want.
So far, there is no Logstash API for reading SQL.
My recommendation: write a program or script, in Java or Python say, that reads the logs from the SQL database and writes them to a file, then use the Logstash file input to read from that file. The Logstash website has a getting-started tutorial; it is easy to learn.
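For example, a minimal sketch in Python (the mysql-connector-python package, the logs table, and the output path are all placeholders):

```python
# Dump SQL rows as JSON lines for a Logstash file input to tail.
import json
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="password", database="mydb"
)
cur = conn.cursor(dictionary=True)  # rows come back as dicts
cur.execute("SELECT id, level, message, created_at FROM logs")  # placeholder schema

with open("/var/log/app/sql_export.log", "a") as f:
    for row in cur:
        f.write(json.dumps(row, default=str) + "\n")  # one JSON object per line

cur.close()
conn.close()
```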
Good Luck
I have a frontend app built with AngularJS. For the backend part, I plan to use data dumps from an external API. These files range from 20 MB to 3 GB, and they are provided in xml.gz format.
My question is about the middle step between having these files and getting them into a database. The idea is to have a database and access it using REST from the frontend.
I'm considering Node.js with a MySQL module for the backend, but I don't really know what to do with the data files.
So, if I want the database to answer with JSON responses, do I have to convert the files to JSON before loading them into the DB? Or does it not matter?
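For concreteness, here is roughly the kind of loader I have in mind (a minimal sketch in Python rather than Node.js; the item element and the items table are hypothetical):

```python
# Stream an xml.gz dump into MySQL without loading it all into memory.
import gzip
import xml.etree.ElementTree as ET
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="password", database="mydb"
)
cur = conn.cursor()

with gzip.open("dump.xml.gz", "rb") as f:
    # iterparse streams the document, so a 3 GB file is handled piecewise.
    for event, elem in ET.iterparse(f, events=("end",)):
        if elem.tag == "item":  # hypothetical element name
            cur.execute(
                "INSERT INTO items (name, value) VALUES (%s, %s)",  # placeholder table
                (elem.findtext("name"), elem.findtext("value")),
            )
            elem.clear()  # release parsed elements

conn.commit()
cur.close()
conn.close()
```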
Total n00by question, I know, but I'm kinda lost in the whole backend world.
Thanks!