Can you Export pm2 logs to .json, or csv? - pm2

Afternoon,
The marketing team has lost a lot of data at my current place of work.
I am running PM2 over PuTTY.
I am currently using pm2 logs applicationname --lines 1000 to get the data I need.
This works, but is there a good way to export the data, perhaps to a .csv or .json?

You should be able to find the logs in
$HOME/.pm2/logs/*
and from there it's easy to convert them to anything you want, for example by reading the .log files and converting them to JSON.
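A minimal sketch of that conversion in Python, assuming the default plain-text log files and an app name of applicationname (the glob pattern and output file names are illustrative, adjust them to your setup):

import csv
import glob
import json
import os

# Collect every out/error log for the app; adjust the pattern to your app name
log_paths = glob.glob(os.path.expanduser("~/.pm2/logs/applicationname-*.log"))

rows = []
for path in log_paths:
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            rows.append({"file": os.path.basename(path), "message": line.rstrip("\n")})

# CSV export
with open("pm2_logs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "message"])
    writer.writeheader()
    writer.writerows(rows)

# JSON export
with open("pm2_logs.json", "w", encoding="utf-8") as f:
    json.dump(rows, f, indent=2)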

Related

Firebase: Exporting JSON Unable to export The size of data exported at a single location cannot exceed 256 MB

I used to download a node of the Firebase Realtime Database every day to monitor some outputs by exporting the .json file for that node. The JSON file itself is about 8 MB.
Recently, I started receiving an error:
"Exporting JSON Unable to export The size of data exported at a single location cannot exceed 256 MB. Navigate to a smaller part of the database or use backups. Read more about limits"
Can someone please explain why I keep getting this error, since the JSON file I exported just yesterday was only 8.1 MB?
I probably solved it! I disabled a CORS addon in Chrome and suddenly the export worked :)
To get around this, you can use Postman's Import feature, because downloading a large JSON file from the Firebase dashboard in a browser sometimes fails partway through. You can paste the usual cURL command into it and just click Save Response once the response arrives. To avoid the authentication complexity, you can set the database rule to read: true until the download is complete, though you need to keep security in mind while you do this. Postman can also freeze the UI while previewing the JSON, but you don't need to be bothered by that.
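For reference, a minimal sketch of pulling the same node with Python over the Realtime Database REST API rather than the dashboard (the database URL and node path are placeholders, and this assumes the rules have temporarily been set to read: true as described above):

import requests

# Placeholder URL: replace with your database URL and the node you want to export
url = "https://your-project-id.firebaseio.com/your-node.json"

# Stream the response to disk so a large export doesn't have to fit in memory at once
with requests.get(url, stream=True, timeout=300) as resp:
    resp.raise_for_status()
    with open("node-export.json", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)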

Failing to Upload Large JSON file to Firebase Real Time Database

I have a 1 GB JSON file to upload to the Firebase RTDB, but when I press Import, it loads for a while and then I get this error:
There was a problem contacting the server. Try uploading your file again.
I have tried to upload a 30 MB file and everything is OK.
It sounds like your file is too big to upload to Firebase in one go. There are no parameters to tweak here, and you'll have to use another means of getting the data into the database.
You might want to give the firebase-import library a go, try the Firebase CLI's database:set command, or write your own import for your file format using the Firebase API.
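If you do write your own import, here is a rough Python sketch of the idea using the Realtime Database REST API, writing one top-level child at a time instead of the whole 1 GB file in a single request (the URL, node path, and auth value are placeholders, and this assumes the top level of your JSON is an object whose children are individually small enough to upload):

import json
import requests

BASE_URL = "https://your-project-id.firebaseio.com"  # placeholder database URL
NODE = "your-node"                                   # placeholder target path
AUTH = {"auth": "YOUR_TOKEN_OR_SECRET"}              # placeholder credential

with open("big-export.json", encoding="utf-8") as f:
    data = json.load(f)  # still has to fit in memory; a streaming parser helps beyond that

# Write each top-level child as its own request so no single request is huge
for key, value in data.items():
    resp = requests.put(f"{BASE_URL}/{NODE}/{key}.json", params=AUTH, json=value, timeout=120)
    resp.raise_for_status()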

Creating a text/csv file from LibreOffice

I am in the process of starting a project and I want to understand the best way to automate the creation of a text/CSV file containing the result of a request. Each time the database is updated, I want that file to be updated too. I'm using LibreOffice Base.
Hey,
LibreOffice Base is not going to help you in this case, as it is just a GUI tool for querying a connected DB.
I would look at getting your backend to append to a log/CSV file every time it receives a request and successfully obtains/manipulates data in the database.
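A minimal sketch of that idea in Python, where a helper appends one row to a CSV whenever the backend successfully touches the database (the file name, columns, and the surrounding backend call are all illustrative):

import csv
from datetime import datetime, timezone

LOG_FILE = "requests.csv"  # illustrative file name

def log_request(action, table, row_count):
    """Append one line to the CSV every time a request reads or changes the database."""
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), action, table, row_count])

# In the backend, call it right after the database work succeeds, e.g.:
# rows = db.execute("UPDATE ...")   # hypothetical database call
# log_request("update", "orders", rows)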

Formatting S3 BucketSize Logs for Logstash

I am trying to find a solution to show the S3 BucketSize information in Kibana using the ELK Stack. I need the S3 BucketSize information stored in an S3 bucket and then read the files through Logstash. Hopefully this would let me view historical data. Since I am new to ELK, I am pretty unsure how to get this to work.
I have tried the things below, but they did not go well...
Using the ncdu-s3 tool I have the S3 BucketSize details in JSON format. I am trying to put that into Logstash and it is throwing the error below.
The JSON file format is
[1,0,{"timestamp":1469370986,"progver":"0.1","progname":"ncdu-s3"},
[{"name":"s3:\/\/BucketName\/FolderName1"},
[{"name":"FolderName1"},
{"dsize":107738,"name":"File1.rar"},
{"dsize":532480,"name":"File2.rar"},
[{"name":"FolderName2"},
{"dsize":108890,"name":"File3.rar"}]]]
I use the command below
curl -XPOST 'http://localhost:9200/test/test/1' -d @/home/ubuntu/filename.json
ERROR:
{"error":"MapperParsingException[Malformed content, must start with an object]","status":400}
I think I need to reformat the JSON file in order for this to work... Can anyone suggest a good way to go?
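Elasticsearch is rejecting this because the top level of the ncdu-s3 export is an array, not an object, so one way to go is to flatten it into one small JSON object per file and index those instead (directly or via Logstash). A rough Python sketch of that flattening, assuming the nested-list layout shown above, where a directory is a list whose first element names it and whose remaining elements are its files or subdirectories:

import json

def walk(node, prefix=""):
    """Yield one flat record per file from an ncdu-style nested list."""
    if isinstance(node, list):
        info, children = node[0], node[1:]
        path = f"{prefix}/{info.get('name', '')}".strip("/")
        for child in children:
            yield from walk(child, path)
    elif isinstance(node, dict) and "dsize" in node:
        yield {"path": f"{prefix}/{node['name']}", "size": node["dsize"]}

with open("/home/ubuntu/filename.json", encoding="utf-8") as f:
    export = json.load(f)

# export[3] is the root listing; the leading elements are version numbers and metadata
with open("/home/ubuntu/flattened.ndjson", "w", encoding="utf-8") as out:
    for record in walk(export[3]):
        out.write(json.dumps(record) + "\n")

Each line of the resulting file is a standalone object, which Elasticsearch will accept.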

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files, and attachments. The total size is around 50 MB. JIRA allows importing CSV data, so I was thinking of trying to convert the JSON data to CSV, but all the converters I have seen online will only handle a single file, rather than parsing recursively through an entire folder structure and nicely creating the CSV equivalent, which could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
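A minimal sketch of that first script in Python, assuming the Lighthouse export layout of one ticket.json per ticket directory (the "ticket" wrapper key and the title/state/number field names are guesses; check the actual keys in your export and extend the columns as needed):

import csv
import glob
import json

with open("jira_import.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Status", "Lighthouse ID"])  # minimal column set
    for path in glob.glob("path/to/lighthouse_export/tickets/*/ticket.json"):
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        ticket = data.get("ticket", data)  # "ticket" wrapper key is an assumption
        writer.writerow([ticket.get("title"), ticket.get("state"), ticket.get("number")])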
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is, in your script, start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file containing all the tickets to import into JIRA.
In Ruby (which is what we used), it would look something like this:
require "json"

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
  # Parse each exported ticket and pull out the fields you need
  JSON.parse(File.read(ticket)).each do |data|
    # access ticket data and add it to a CSV
  end
end