How to export multiple STIX2 bundles?

I have been working on a threat feed recently, and I am trying to export all the threats that I have found on my system in STIX2 format.
I have gone through its documentation and I have created the STIX2 bundles. Now I am trying to figure out a way to export all the bundles to one file, so that the file can be used in threat feeds. For instance, Splunk is a tool that can read such a STIX feed.
A single bundle looks like this:
{"type":"bundle","id":"bundle--92bf6237-1b3d-43f9-85d7-e31c0b3f11b7","spec_version":"2.0","objects":[{"id":"indicator--c68d9454-32e3-4f30-8321-a6758df83877","type":"indicator","created":"2017-10-27T00:00:00.000Z","modified":"2017-10-27T00:00:00.000Z","name":"File hash for malware variant","pattern":"[file:hashes.md5 = 'a5ef29d5315111c80a5c1abad14c8972']","valid_from":"2017-10-27T00:00:00.000Z","labels":["malicious-activity"]},{"type":"malware","id":"malware--0e1f009a-0e6c-437e-a1ec-0c003522b1d3","created":"2017-10-27T00:00:00.000Z","modified":"2017-10-27T00:00:00.000Z","name":"Malware","labels":["remote-access-trojan"]},{"type":"relationship","id":"relationship--c17cf161-4ec2-471c-ac58-4a1cf6e0964f","created":"2017-10-27T00:00:00.000Z","modified":"2017-10-27T00:00:00.000Z","relationship_type":"indicates","source_ref":"indicator--c68d9454-32e3-4f30-8321-a6758df83877","target_ref":"malware--0e1f009a-0e6c-437e-a1ec-0c003522b1d3"}]}
Now that I have multiple bundles, how can I export them?

In a TAXII feed, there is one STIX2 bundle per content block. You can use an OpenTAXII server (it supports TAXII 1.1 and is content agnostic) to serve that feed to your Splunk instance.
If you read the data from files instead, store every bundle in a separate JSON file.
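If you go the file route, the export step is just serializing each bundle on its own. Here is a minimal Python sketch, assuming your bundles are already available as dicts in a list called bundles (the function and file naming are illustrative, not part of any STIX tooling):

import json
import os

def export_bundles(bundles, out_dir="."):
    for bundle in bundles:
        # Each STIX2 bundle id (e.g. "bundle--92bf6237-...") is unique,
        # so it makes a safe, collision-free file name.
        path = os.path.join(out_dir, bundle["id"] + ".json")
        with open(path, "w") as f:
            json.dump(bundle, f)

A TAXII client (or whatever Splunk input you use for STIX feeds) can then consume the bundles one file at a time.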

Related

Angular merge multiple JSON files into one

I am trying to configure the i18next framework for translations in my Angular app. I would like to have multiple JSON files (one per view or part of the application), e.g. home, catalogue.
The way I want to approach this is to put all of these in the assets folder under translations, so the structure will look like this:
-home.json
-catalogue.json
Then, when the build runs, I want to bundle all of these into a single JSON file, for example en-GB.json.
The problem I am facing is that I don't know how to go about this. I can't locate the webpack config, and if I were to introduce one, what would the impact be?
In short, how can I bundle multiple JSON files into one file?
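One simple option, since the Angular CLI hides its webpack config, is to merge the files in a small pre-build step instead. A minimal Python sketch, assuming the files live under src/assets/translations and each file's contents should sit under a key named after the file (all paths and names here are illustrative):

import glob
import json
import os

merged = {}
for path in glob.glob("src/assets/translations/*.json"):
    # home.json -> key "home", catalogue.json -> key "catalogue"
    key = os.path.splitext(os.path.basename(path))[0]
    if key == "en-GB":
        continue  # don't re-merge a previously generated output file
    with open(path) as f:
        merged[key] = json.load(f)

with open("src/assets/translations/en-GB.json", "w") as f:
    json.dump(merged, f, ensure_ascii=False, indent=2)

Hooked into an npm prebuild script, this avoids having to introduce or customize a webpack config at all.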

MarkLogic Java API batch upload files (.csv)

I'm trying out the MarkLogic Java API and want to bulk upload some files with the extension .csv.
I'm not sure what to use, since the Java API only supports JSON, XML, and TXT files.
How do I batch upload files using the MarkLogic Java API? Do I convert everything to JSON?
Do I convert everything to JSON?
Yes, that is a common way to do it.
If you would like additional examples of how you can wrangle CSV with the Java Client API, check out OpenCSVBatcherExample and JacksonDatabindTest.testDatabindingThirdPartyPojoWithMixinAnnotations. The first demonstrates converting the CSV to XML and using a custom REST extension. The second example (well, unit test...) demonstrates converting the CSV to JSON and using the batch upload (Bulk Writes) capabilities Justin linked to.
If you have CSV files on your filesystem, I’d start with mlcp, as suggested above. It will handle all of the parsing and splitting into multiple transactions/batches for you. Take a look at the mlcp documentation for more details and some example configurations.
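For a flavor of what that looks like, a typical mlcp invocation for CSV input is roughly the following (host, port, credentials, and paths are placeholders; check the mlcp documentation for the exact options in your version):

mlcp.sh import -host localhost -port 8000 \
  -username admin -password admin \
  -input_file_path /data/input.csv \
  -input_file_type delimited_text \
  -document_type json

With delimited_text input, mlcp creates one document per CSV row, which is exactly the splitting behavior described above.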
If you’d like more control over the parsing and splitting logic than mlcp gives you out-of-the-box or you’re getting CSV from some other source (i.e. not files on the filesystem), you can use the Java Client API. The Java Client API allows you to efficiently write batches using a WriteSet. Take a look at the “Bulk Writes” example.
According to your reply to Justin, you cannot use mlcp because it is a command-line tool and you need to integrate it into a web portal.
Well, mlcp is released as open source software under the Apache 2 license. So if you are happy with that license, you have the source to integrate.
But what I see as your main problem statement is more specific:
How can I create multiple XML or JSON documents from a CSV file [allowing the use of the Java API to then upload them as documents in MarkLogic]?
With that specific problem statement:
1) Have a look at SplitDelimitedTextReader.java from the mlcp source.
2) Try some Java libraries built for this purpose, such as http://jsefa.sourceforge.net/quick-tutorial.html
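Whichever library does the parsing, the splitting step itself is small. A minimal Python sketch of the row-to-document conversion, assuming a CSV with a header row (the file names are hypothetical):

import csv
import json

# Turn each row of input.csv into its own JSON document,
# ready to be uploaded to MarkLogic as individual documents.
with open("input.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        with open(f"doc-{i}.json", "w") as out:
            json.dump(row, out)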

How to import CSV files into Firebase

I see that we can import JSON files into Firebase.
What I would like to know is whether there is a way to import CSV files (I have files that could have about 50K or even more records, with about 10 columns).
Does it even make sense to have such files in Firebase?
I can't answer whether it makes sense to have such files in Firebase; you should answer that.
I also had to upload CSV files to Firebase, and I finally transformed my CSV into JSON and used firebase-import to add my JSON into Firebase.
There are a lot of CSV to JSON converters (even online ones). You can pick the one you like the most (I personally used node-csvtojson).
I've uploaded many tab-separated files (40MB each) into Firebase.
Here are the steps:
I wrote Java code to translate TSV into JSON files (a Python equivalent is sketched below).
I used firebase-import to upload them. To install it, just type in cmd:
npm install firebase-import
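For the translation step itself, here is a minimal Python sketch, assuming a tab-separated file with a header row and a unique id column (the file and column names are hypothetical):

import csv
import json

records = {}
with open("data.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        # firebase-import takes a JSON object, so key each record by its id column
        records[row["id"]] = row

with open("data.json", "w") as out:
    json.dump(records, out)

The resulting data.json can then be pushed with firebase-import (see its README for the current flags, e.g. --database_url, --path, and --json).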
One trick I used, on top of all the ones already mentioned, is to synchronize a Google spreadsheet with Firebase.
You create a script that uploads directly to the Firebase DB based on rows/columns. It worked quite well, and it can be more visual for fine-tuning the raw data compared to working with the CSV/JSON format directly.
Ref: https://www.sohamkamani.com/blog/2017/03/09/sync-data-between-google-sheets-and-firebase/
Here is the fastest way to import your CSV into Firestore:
Create an account in Jet Admin
Connect Firebase as a data source
Import the CSV to Firestore
Ref: https://blog.jetadmin.io/how-to-import-csv-to-firestore-database-without-code/

db.json file is created and added to .gitignore using hexo.io

I have been trying to find out what a db.json file is and why it is being automatically generated. All the hexo.io documentation says is:
$ hexo clean
Cleans the cache file (db.json) and generated files (public).
What is this exactly? Since these are all static pages, is this some sort of makeshift database?
Most commonly, db.json is used when you're running a server via hexo server. I believe it's there for performance improvements. It doesn't affect generation (hexo generate) or deployment (hexo deploy).
The db.json file stores all the data needed to generate your site: all posts, tags, categories, etc. The data is stored as a JSON-formatted string so it's easier and faster to parse the data and generate the site.

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files, and attachments. The total size is around 50MB. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all the converters I have seen online only handle a single file, rather than parsing recursively through an entire folder structure and producing the CSV equivalent that can then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
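A minimal sketch of that first script, assuming each exported ticket lives in its own ticket.json file; the field names (number, title, state) and the nesting under a "ticket" key are guesses, so adjust them against the real Lighthouse export:

import csv
import glob
import json

with open("jira_import.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["lighthouse_id", "summary", "status"])  # header row for the JIRA importer
    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path) as f:
            data = json.load(f)
        ticket = data.get("ticket", data)  # some exports nest the ticket under a "ticket" key
        writer.writerow([ticket.get("number"), ticket.get("title"), ticket.get("state")])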
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is in your script, start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all tickets.
In Ruby (which is what we used), it would look something like this:
require "json"

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
  data = JSON.parse(File.read(ticket))
  # access the ticket data here and add it as a row to your master CSV
end