I can't find any input plugin for relational databases in the Logstash documentation.
What is the best approach to import data from a relational database table with Logstash? Is it to connect Elasticsearch directly to the database using JDBC?
You'll need to use the JDBC River (https://github.com/jprante/elasticsearch-river-jdbc) to load JDBC data into Elasticsearch (or write your own code to do it).
It looks like there are several JIRAs open requesting JDBC loading in Logstash, but they haven't been worked on: https://logstash.jira.com/browse/LOGSTASH-1764
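For reference, on an Elasticsearch 1.x cluster the river was registered by PUT-ing a meta document into the _river index. This is only a hedged sketch from memory of the river's README: the host, credentials, and the exact "jdbc" settings are placeholders, so check the project page for the real options.

```python
# Rough sketch: register a JDBC river on an Elasticsearch 1.x node.
# Host, credentials and the "jdbc" block are placeholders; the authoritative
# settings are documented in the elasticsearch-river-jdbc README.
import json
import requests

river = {
    "type": "jdbc",
    "jdbc": {
        "url": "jdbc:mysql://localhost:3306/mydb",  # placeholder JDBC URL
        "user": "dbuser",
        "password": "dbpass",
        "sql": "SELECT * FROM my_table",            # rows become documents
    },
}

resp = requests.put(
    "http://localhost:9200/_river/my_jdbc_river/_meta",
    data=json.dumps(river),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print(resp.json())
```

Note that rivers were deprecated in later Elasticsearch releases, so treat this as a stopgap.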
There's this
WIP: Under Development, NOT FOR PRODUCTION
This is a plugin for Logstash.
It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.
Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation are placed under one central location.
So far, there is no Logstash input for reading from SQL databases.
My recommendation: write a program/script in, say, Java or Python to read the rows from the SQL database and write them to a file, then use the Logstash file input to read from that file. The Logstash website has a getting-started tutorial; it is easy to learn.
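A minimal sketch of that script in Python, assuming the PyMySQL package; the connection details, table name, and output path are placeholders:

```python
# Sketch: dump rows from a SQL table into a line-delimited JSON file that a
# Logstash file input can then read. Connection details, table name and the
# output path are placeholders for your own setup.
import json
import pymysql

conn = pymysql.connect(host="localhost", user="dbuser",
                       password="dbpass", database="mydb")
try:
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("SELECT * FROM my_table")
        with open("/var/log/mydb/my_table.json", "a") as out:
            for row in cur:
                # default=str handles dates/decimals that json cannot serialize
                out.write(json.dumps(row, default=str) + "\n")
finally:
    conn.close()
```

Pointing a Logstash file input with a json codec at that path then gets the rows into Elasticsearch.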
Good Luck
I have a project where I need to read XML files with Apache Drill and process them. Can someone tell me how to configure it?
NB: I use the MapR distribution.
I tried to add the configuration to the configuration UI, but I get an error (see the attached screenshot).
Thanks in advance
You'll need to use a Drill distribution based on Apache Drill >= 1.19 for the XML format plugin.
So this is more of a Drill question than a MapR question.
There are two key steps here:
Make sure that Drill can access whatever you use to store your data (it sounds like your data is XML files in MapR, which is now called HPE Ezmeral Data Fabric).
Make sure that Drill can understand the data you have. I am not current on Drill, but reading many kinds of XML should be doable.
For getting access, there are two major paths to accessing files on Ezmeral Data Fabric. One path is to mount the data fabric as a conventional file system on all the nodes running Drillbits. This is often done using NFS mounts, but can also be done with the FUSE driver provided with the data fabric.
The other major approach to getting data access is to use the HDFS API framework to access data via maprfs://... path names. This requires installing the data fabric client on all of the nodes running Drillbits.
It sounds like you are running the version of Drill that is packaged with the old MapR or current HPE Ezmeral system. This is the easiest approach since the packaged version is integrated with the client libraries needed to use the HDFS API with maprfs:// resources (it also provides access to the tables and streams in the data fabric).
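Once Drill can reach the files and the XML format is enabled in your storage plugin, a quick sanity check is a query through Drill's REST API. A hedged Python sketch follows; the Drillbit host, the dfs workspace, and the file path are placeholders, and a secured MapR/Ezmeral cluster will additionally require authentication:

```python
# Sketch: run a SQL query against a Drillbit's REST endpoint (default port 8047).
# Host, workspace and file path are placeholders; secured clusters need auth.
import requests

query = {
    "queryType": "SQL",
    "query": "SELECT * FROM dfs.`/data/example.xml` LIMIT 10",
}

resp = requests.post("http://drillbit-host:8047/query.json", json=query)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
```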
The technologies I am using to fetch data from my MySQL database are Spark 2.4.4 and Scala. I want to display that data in my Angular 8 project. Any help on how to do it? I could not find any documentation regarding this.
I am not sure this is a Scala/Spark question; it sounds more like a system-design question for your project.
One solution is to have your Angular 8 app read from MySQL through a thin backend REST API (a browser app cannot talk to MySQL directly). There are tons of tutorials online.
Another solution is to use your Spark/Scala job to read the data and dump it to a CSV/JSON file somewhere, and have Angular 8 read that file. The pro is that you can do some transformation before displaying your data; the con is that there is latency between transformation and display. After reading the flat file as JSON it's up to you how to render that data on the user's screen.
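Your stack is Scala, but the same idea sketched in PySpark (the JDBC URL, credentials, table, and output path are placeholders, and the MySQL JDBC driver must be on Spark's classpath):

```python
# Sketch: read a MySQL table over JDBC and dump it as JSON that a web app
# (e.g. Angular) can fetch over HTTP. All connection details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-to-json").getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://localhost:3306/mydb")
      .option("dbtable", "my_table")
      .option("user", "dbuser")
      .option("password", "dbpass")
      .load())

# Optional: transform before exporting, e.g. df = df.filter(...)

(df.coalesce(1)                       # single output file, easier to serve
   .write.mode("overwrite")
   .json("/srv/www/data/my_table"))   # Angular can then GET the JSON file

spark.stop()
```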
I am new to GeoMesa; I have only just run the geomesa command. After following the command-line tools tutorial on the GeoMesa website, I found some information on ingesting data into GeoMesa through a .csv file.
So, for my research:
I have a MySQL database storing all the information sent from an Android Application.
And I want to perform some geospatial analytics on it.
Right now I am converting my MySQL table to a .csv file and then ingesting it into GeoMesa, as advised on the GeoMesa website.
But my questions are:
Is there any better option? The data is in the GB range and it is streaming data, so I have to regenerate the .csv file regularly.
Is there any API through which I can connect my MySQL database to GeoMesa?
Is there any way to ingest a .sql dump file directly? That would be easier than going through a .csv file.
Since you are dealing with streaming data, I'd point to two GeoMesa integrations, plus a note on SQL dumps:
First, you might want to check out NiFi for managing data flows. If that fits into your architecture, then you can use GeoMesa with NiFi.
Second, Storm is quite popular for working with streaming data. GeoMesa has a brief tutorial for Storm here.
Third, to ingest SQL dumps directly, one option would be to extend the GeoMesa converter library to support them. So far, we haven't had that as a feature request from a customer or a contribution to the project. It'd definitely be a sensible and welcome extension!
I'd also point out the GeoMesa gitter channel. It can be useful for quicker responses.
I'm trying to get all the historical information about a FI-WARE sensor.
I've seen that Orion uses Cygnus to store historical data in Cosmos. Is that information accessible, or is it only possible to get it through IDAS?
Where could I get more info about this?
The ways you can consume the data, in increasing order of learning curve, are:
1) Working with the raw data, either "locally" (i.e. logging into the Head Node of the cluster) and using the Hadoop commands, or "remotely" by using the WebHDFS/HttpFS REST API (see the sketch right after this list). Please note that with this approach you have to implement whatever analysis logic you need, since Cosmos only allows you to manage, as said, raw data.
2) Working with Hive in order to query the data in a SQL-like fashion. Again, you can do it locally by invoking the Hive CLI, or remotely by implementing your own Hive client in Java (other languages work too; see the sketch after the advice below) using the Hive libraries.
3) Working with MapReduce (MR) in order to implement more powerful analysis. For this, you'll have to create your own MR-based application (typically in Java) and run it locally. Once you are done with the local run of the MR app, you can move to Oozie, which allows you to run such MR apps remotely.
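A hedged Python sketch of the "remote" path in 1), using the WebHDFS/HttpFS REST API; the host, port, user, and file path are placeholders, and the Cosmos instance may require additional authentication:

```python
# Sketch: list and read files written by Cygnus, via WebHDFS/HttpFS.
# Host, port, user and path are placeholders; HttpFS usually listens on 14000
# and plain WebHDFS on 50070.
import requests

base = "http://cosmos.example.org:14000/webhdfs/v1"
user = "myuser"

# List the contents of the user's HDFS directory
listing = requests.get(f"{base}/user/{user}",
                       params={"op": "LISTSTATUS", "user.name": user})
print(listing.json())

# Open (download) one file
data = requests.get(f"{base}/user/{user}/mysensor/mysensor.txt",
                    params={"op": "OPEN", "user.name": user})
print(data.text)
```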
My advice is to start with Hive (option 1 is easy but does not provide any analysis capabilities): first try executing some Hive queries locally, then implement your own remote client. If this kind of analysis is not enough for you, then move to MapReduce and Oozie.
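And a hedged sketch of the remote Hive path in 2). The answer above mentions Java clients; this is just the same idea in Python with the PyHive package. Host, port, username, and table name are placeholders, and the Cosmos Hive endpoint and authentication may differ from a stock HiveServer2:

```python
# Sketch: run a SQL-like query against the Hive tables created by Cygnus.
# Endpoint, credentials and table name are placeholders.
from pyhive import hive

conn = hive.connect(host="cosmos.example.org", port=10000, username="myuser")
cur = conn.cursor()
cur.execute("SELECT * FROM myuser_mysensor LIMIT 10")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```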
All the documentation regarding Cosmos can be found in the FI-WARE Catalogue of enablers. Within this documentation, I would highlight:
Quick Start for Programmers.
User and Programmer Guide (functionality described in sections 2.1 and 2.2 is not currently available in FI-LAB).
Has anyone had much experience with data migration into and out of NetSuite? I have to export DB2 tables into MySQL, manipulate the data, and then export it to a CSV file. Then take a CSV file of accounts and manipulate the data again so that accounts from our old system match up with the new one. Has anyone tried to do this in MySQL?
A couple of options:
Invest in a data transformation tool that connects to NetSuite and DB2 or MySQL. Look at Dell Boomi, IBM Cast Iron, etc. These tools allow you to connect to both systems, define the data to be extracted, perform data transformation functions and mappings and do all the inserts/updates or whatever you need to do.
For MySQL to NetSuite, PHP scripts can be written to access MySQL and NetSuite. On the NetSuite side, you can either use SOAP web services or write custom REST APIs (RESTlets) within NetSuite. SOAP is probably a bit slower than REST, but with REST you have to write the API yourself (server-side JavaScript; it's not hard, but there's a learning curve).
Hope this helps.
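To make the RESTlet path a bit more concrete, here is a rough Python sketch (the answer above uses PHP, but the idea is the same). Everything NetSuite-specific is a placeholder: the RESTlet itself is a hypothetical server-side SuiteScript you would have deployed, the script/deploy IDs are made up, and the authentication header (NLAuth or token-based OAuth, depending on your account) is omitted:

```python
# Rough sketch: push rows from MySQL to a hypothetical NetSuite RESTlet.
# The RESTlet, its script/deploy IDs, and the auth header are placeholders.
import json
import pymysql
import requests

RESTLET_URL = ("https://rest.netsuite.com/app/site/hosting/restlet.nl"
               "?script=123&deploy=1")           # placeholder IDs
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "<NLAuth or OAuth header for your account>",  # placeholder
}

conn = pymysql.connect(host="localhost", user="dbuser",
                       password="dbpass", database="mydb")
with conn.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("SELECT * FROM accounts_staging")  # placeholder staging table
    for row in cur:
        resp = requests.post(RESTLET_URL, headers=HEADERS,
                             data=json.dumps(row, default=str))
        resp.raise_for_status()                    # log/handle errors as needed
conn.close()
```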
I'm an IBM i programmer; try CPYTOIMPF to create a pretty generic CSV file. It'll go to a stream file; if you have NetServer running you can map a network drive to the IFS directory, or you can use FTP to get the CSV file from the IFS to another machine in your network.
Try Adeptia's NetSuite integration tool to perform ETL. You can also try Pentaho ETL for this (as far as I know, Celigo's NetSuite connector is built upon Pentaho). Jitterbit also has an extension for NetSuite.
We primarily have 2 options to pump data into NS:
i) SuiteTalk ---> SOAP-based web services. There are two versions of SuiteTalk: synchronous and asynchronous.
Typical tools like Boomi/Mule/Jitterbit use synchronous SuiteTalk to pump data into NS. They also have decent editors to help you do the mapping.
ii) RESTlets ---> NetSuite's REST-based architecture; these can also be used, but you may have to write external brokers to communicate with them.
Choose whichever fits your need; in most cases you will be using SuiteTalk to bring data into NetSuite.
Hope this helps ...
We just got done doing this. We used an iPaaS platform called Jitterbit (similar to Dell Boomi). It can connect to MySQL and to NetSuite, and you can do transformations in the tool. I have been really impressed with the platform overall so far.
There are different approaches; I like the following for processing a batch job:
To import data to Netsuite:
Export a CSV from the old system and place it in a NetSuite File Cabinet folder (use a RESTlet or web services for this).
Run a scheduled script to load the files in the folder and update the records.
Don't forget to handle errors. Ways to handle errors: send an email, create a custom record, log to a file, or write to a record.
Once the file has been processed, move it to another folder or delete it.
To export data out of Netsuite:
Gather the data and export it to a CSV (you can use a saved search or similar).
Place CSV in File Cabinet folder.
From an external server, call web services or a RESTlet to grab new CSV files in the folder.
Process file.
Handle errors.
Call web services or a RESTlet to move or delete the CSV file.
You can also use Pentaho Data Integration; it's free and the learning curve is not that steep. I took this course and was able to play around with the tool within a couple of hours.