Import into Elasticsearch from repeatedly created JSON but store old values - mysql

New to Elasticsearch and Kibana, so please bear with me. I'm not using Logstash but JSON files to import the information I need. Basically, Kibana is used for monitoring change in values in a MySQL database over time. Right now, the transfer of information works via a script that 1) deletes the previous versions of the JSON files containing my information, 2) exports the MySQL information into JSON files, and 3) re-imports the newly created JSON files.
Each row of my data has a timestamp. Here comes the problem: the old versions of the information I imported are no longer reflected in Kibana (maybe because I deleted the previous files?). Is there a way to keep the information with the old timestamps and simultaneously import the new ones?

If you were to leave the old documents in Elasticsearch and insert new ones, then you would be able to see/manage both. Elasticsearch will issue new IDs for the new documents, and all should be well.
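For illustration, here is a minimal sketch of that export step in Python, assuming the official elasticsearch client (8.x signatures) and a hypothetical mysql-metrics index. Omitting an explicit document ID lets Elasticsearch generate a fresh one each run, so documents from earlier runs keep their old timestamps and stay visible in Kibana:

import json
from elasticsearch import Elasticsearch  # official Python client, 8.x signatures assumed

es = Elasticsearch("http://localhost:9200")

with open("export.json") as f:
    rows = json.load(f)  # list of rows exported from MySQL

for row in rows:
    # No explicit "id": Elasticsearch assigns a new one, so documents
    # indexed on previous runs are left in place instead of overwritten.
    es.index(index="mysql-metrics", document=row)

The key change is simply to stop deleting and re-importing: append the new documents and let the old ones stay in the index.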

Related

I want to compare the data I have in a CSV file to the data which is in the LDAP production server

I want to compare the data I have in a CSV file to the data which is in the LDAP production server.
There are thousands of users' records in the CSV file, and I want to compare them with the data in the production server.
Let's suppose I have user ID xtz12345 in the CSV file with uidNumber 123456. Now I want to cross-check the uidNumber of the same user ID xtz12345 in the production server.
Is there any way I can automate this? There are thousands of user IDs to check, and doing it manually would probably take a lot of time. Can anyone suggest what I should do?
A PowerShell script is a good place to start.
Import the ActiveDirectory module in PowerShell (assuming Windows AD; download and install the RSAT tools) to fetch information from AD.
Use Import-Csv in PowerShell to read the CSV values. Now compare the first set with the second.
Happy to help
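If a non-PowerShell route is also acceptable, a rough sketch of the same comparison in Python using the ldap3 package might look like the following. The server address, bind credentials, base DN and CSV column names are all assumptions; substitute your own:

import csv
from ldap3 import Server, Connection, ALL

# Hypothetical connection details - replace with your production server's
server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=com",
                  password="secret", auto_bind=True)

with open("users.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumes columns named userId and uidNumber
        conn.search("dc=example,dc=com",
                    f"(uid={row['userId']})",
                    attributes=["uidNumber"])
        if not conn.entries:
            print(f"{row['userId']}: not found on the production server")
        elif str(conn.entries[0].uidNumber) != row["uidNumber"]:
            print(f"{row['userId']}: CSV says {row['uidNumber']}, "
                  f"LDAP says {conn.entries[0].uidNumber}")

Either way the idea is the same: read the CSV once, look each user up in the directory, and report the mismatches.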

Creating a text/csv file from LibreOffice

I am in the process of starting a project, and I want to understand the best way to automate the creation of a text/CSV file containing the result of a request. Each time the database is updated, I want that file to be updated too. I'm using LibreOffice Base.
Hey,
LibreOffice Base is not going to help you in this case, as it is just a GUI tool for querying a connected DB.
I would look at getting your backend to append to a log/CSV file every time it receives a request and successfully obtains/manipulates data in the database.
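As a rough illustration of that idea (not LibreOffice-specific), here is a minimal Python sketch. The file name, columns and the log_change() helper are all hypothetical; the point is just to append one CSV row per successful database change:

import csv
from datetime import datetime, timezone

LOG_FILE = "changes.csv"

def log_change(table, row_id, values):
    # Append one line describing a successful database update to the CSV log
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), table, row_id, values])

# Example: call this right after the backend commits an update
log_change("orders", 42, {"status": "shipped"})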

Importing json file to Kibana via UI

I've just moved from 5.2.1 to 5.2.2 (a bug that affected us was fixed).
Before I migrated, I exported all queries/searches to a JSON file in order to upload them to the new Kibana version.
First, I updated the ES version and, to make sure everything worked, reopened Kibana 5.2.1 and imported the JSON file. All good :)
Afterwards, I updated to Kibana 5.2.2.
When I opened it, all searches, visualizations and dashboards were there. Is this the proper and straightforward way to copy my data when updating versions?
Or should I do it like in this question?
Tnx
OK, I got it, and it's quite simple :)
When you create queries/visualizations in Kibana, they are saved to the .kibana index in ES (the default index name, set in the config file). So, when updating Kibana's version and reading from the same ES, the data will appear in the UI.
If a user wishes to save them to a different index, they should change it in the config file.
For more reading see here
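As a quick, hypothetical sanity check for a Kibana/ES 5.x setup like the one above, you can count the saved objects sitting in that index with the Python client (host and index name below are the defaults):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
# Searches, visualizations and dashboards are stored as documents in .kibana
print(es.count(index=".kibana")["count"])

If the count looks right after pointing the new Kibana at the same ES, the saved objects made it through the upgrade.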

How to import CSV files into Firebase

I see we can import JSON files into Firebase.
What I would like to know is if there is a way to import CSV files (I have files that could have about 50K or even more records, with about 10 columns).
Does it even make sense to have such files in Firebase?
I can't answer whether it makes sense to have such files in Firebase; you should answer that.
I also had to upload CSV files to Firebase, and I ended up transforming my CSV into JSON and using firebase-import to add the JSON to Firebase.
There are a lot of CSV-to-JSON converters (even online ones). You can pick the one you like most (I personally used node-csvtojson).
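If you prefer to do the conversion yourself instead of using node-csvtojson, a minimal sketch in Python could look like this (file names are assumptions; the output is simply a JSON array of row objects):

import csv
import json

with open("records.csv", newline="") as src:
    rows = list(csv.DictReader(src))  # one dict per CSV row, keyed by the header line

with open("records.json", "w") as dst:
    json.dump(rows, dst, indent=2)

The resulting records.json can then be pushed with firebase-import as described above.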
I've uploaded many tab-separated files (40 MB each) into Firebase.
Here are the steps:
I wrote Java code to translate the TSV into JSON files.
I used firebase-import to upload them. To install it, just type in cmd:
npm install firebase-import
One trick I used on top of all the ones already mentioned is to synchronize a Google spreadsheet with Firebase.
You create a script that uploads directly to the Firebase DB based on rows/columns. It worked quite well and can be more visual for fine-tuning the raw data compared to the CSV/JSON format directly.
Ref: https://www.sohamkamani.com/blog/2017/03/09/sync-data-between-google-sheets-and-firebase/
Here is the fastest way to import your CSV to Firestore:
Create an account in Jet Admin
Connect Firebase as a DataSource
Import CSV to Firestore
Ref:
https://blog.jetadmin.io/how-to-import-csv-to-firestore-database-without-code/

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments. The total size is around 50 MB. JIRA allows importing CSV data, so I was thinking of trying to convert the JSON data to CSV, but all converters I have seen online will only handle a single file, rather than parsing recursively through an entire folder structure and creating the CSV equivalent which can then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
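A rough sketch of that kind of script (the field names and the tickets/*/ticket.json layout are assumptions based on this thread; inspect your own export to get the real keys):

import csv
import glob
import json

with open("jira_import.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Description", "Reporter", "Created"])  # headers for the JIRA importer to map

    for path in glob.glob("path/to/lighthouse_export/tickets/*/ticket.json"):
        with open(path) as f:
            ticket = json.load(f)
        # Hypothetical field names - adjust to the structure of your export
        writer.writerow([
            ticket.get("title", ""),
            ticket.get("body", ""),
            ticket.get("creator_name", ""),
            ticket.get("created_at", ""),
        ])

Adding an extra column that maps each Lighthouse ticket ID to the new JIRA issue key makes the follow-up attachment script much easier to write.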
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is, in your script, to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all the tickets.
In Ruby (which is what we used), it would look something like this:
Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
JSON.parse(File.open(ticket).read).each do |data|
# access ticket data and add it to a CSV
end
end