Use a JSON file in Gephi

I have a JSON file containing network data that I'd like to visualize using network visualization software like Gephi, but Gephi does not accept this type of file. Is there a way to convert the JSON file, or is there other network visualization software out there that can handle it? I am a Mac user.

You can use networkx in Python: construct your graph, then export it in a format that Gephi can load, such as GEXF or GraphML. See the link for examples of the supported formats. Apparently R is an option too, provided that igraph has a JSON reader.
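For example, a minimal sketch (assuming your JSON is a simple node/link list; the "nodes"/"links" field names are placeholders for whatever your file actually uses):

import json
import networkx as nx

# Load the JSON network data; adjust the field names to match your file.
with open("network.json") as f:
    data = json.load(f)

G = nx.Graph()
G.add_nodes_from(n["id"] for n in data["nodes"])
G.add_edges_from((e["source"], e["target"]) for e in data["links"])

# Write GEXF, which Gephi opens directly.
nx.write_gexf(G, "network.gexf")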

vis.js can import Gephi JSON and display it as a graph/network:
http://visjs.org/examples/network/data/importingFromGephi.html
You could probably adapt your JSON to fit this format.
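As a hedged sketch of such an adaptation in Python (the input field names "from"/"to" and the exact output keys are assumptions; check them against the linked example):

import json

with open("network.json") as f:
    data = json.load(f)

# Reshape into the nodes/edges structure the vis.js example consumes.
gephi_like = {
    "nodes": [{"id": n["id"], "label": n.get("name", str(n["id"]))}
              for n in data["nodes"]],
    "edges": [{"source": e["from"], "target": e["to"]}
              for e in data["links"]],
}

with open("network.gephi.json", "w") as f:
    json.dump(gephi_like, f)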

Related

How would I save a doc/docx/docm file into a directory or S3 bucket using PySpark

I am trying to save a data frame as a document, but it returns the error below:
java.lang.ClassNotFoundException: Failed to find data source: docx. Please find packages at http://spark.apache.org/third-party-projects.html
My code is below:
#f_data is my dataframe with data
f_data.write.format("docx").save("dbfs:/FileStore/test/test.csv")
display(f_data)
Note that I could save files in CSV, text, and JSON format, but is there any way to save a docx file using PySpark?
My question here: do we have support for saving data in doc/docx format?
If not, is there any way to store the file, e.g. by writing a file stream object into a particular folder/S3 bucket?
In short: no, Spark does not support the DOCX format out of the box. You can still collect the data onto the driver node (e.g. into a pandas DataFrame) and work from there.
Long answer:
A document format like DOCX is meant for presenting information in small tables with style metadata. Spark focuses on processing large amounts of data at scale and does not support the DOCX format out of the box.
If you want to write DOCX files programmatically, you can:
Collect the data into a pandas DataFrame: pd_f_data = f_data.toPandas()
Use a Python package to create the DOCX document and save it into a stream. See this question: Writing a Python Pandas DataFrame to Word document
Upload the stream to an S3 blob, using for example boto: Can you upload to S3 using a stream rather than a local file? A sketch combining these steps follows the note below.
Note: if your data has more than a hundred rows, ask the recipients how they are going to use the data. Use DOCX for reporting, not as a file transfer format.
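Putting those three steps together, a rough sketch (assuming the python-docx and boto3 packages; the bucket name and key are placeholders):

import io
import boto3
from docx import Document

# 1) Collect the Spark DataFrame onto the driver as pandas.
pd_f_data = f_data.toPandas()

# 2) Render it as a table in a DOCX document held in memory.
doc = Document()
table = doc.add_table(rows=1, cols=len(pd_f_data.columns))
for cell, name in zip(table.rows[0].cells, pd_f_data.columns):
    cell.text = str(name)
for _, row in pd_f_data.iterrows():
    for cell, value in zip(table.add_row().cells, row):
        cell.text = str(value)

buf = io.BytesIO()
doc.save(buf)
buf.seek(0)

# 3) Upload the in-memory stream to S3.
boto3.client("s3").upload_fileobj(buf, "my-bucket", "reports/test.docx")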

CNC Manufacturability Analysis with Autodesk Forge

I would like to build a CNC manufacturability analysis app using Autodesk Forge.
Setting the computational geometry algorithms aside, what kind of geometric data can I extract from CAD files using this platform? Also, is there an existing app I am unaware of?
Thanks
Depending on the input file format, you can convert to various other formats:
Supported Translations
Though it's not obvious from the above list, all input formats support conversion to the OBJ format, which can also be done at the subcomponent level, so you don't have to export the whole model to OBJ.
Here is a sample that lets you access your files from A360 and convert them into whatever format is currently supported for them:
Model Derivative API sample
Source code: https://github.com/Autodesk-Forge/model.derivative-nodejs-sample
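As a hedged illustration of kicking off such a translation from Python (the token and URN are placeholders; verify the endpoint and payload against the current Model Derivative documentation):

import requests

ACCESS_TOKEN = "..."  # OAuth token with the data:read/data:write scopes
URN = "..."           # base64-encoded URN of the uploaded design file

# Request an OBJ translation via the Model Derivative job endpoint.
resp = requests.post(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"},
    json={"input": {"urn": URN},
          "output": {"formats": [{"type": "obj"}]}},
)
resp.raise_for_status()
print(resp.json())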

How can I import three.js's JSON into Maya?

I would like to import JSON data (made by three.js) into Maya.
I found a Maya exporter, but I couldn't find a Maya importer.
Is there a good way to do it?
There is currently no Three.js JSON importer for Maya.
The Three.js JSON format is meant to be a runtime format, used by Three.js for rendering in WebGL. Usually you would export to the JSON format when you want to use a model on the web. It is not meant to be a storage format.
There are other, more common "interchange" formats like FBX, Collada, or OBJ that are meant for storage and for passing models around between different people and software packages.

Convert a JSON file into GraphJSON to be imported into Titan

I have been looking at ways to convert a JSON file into a GraphJSON graph, and I have come across the GraphJSON Reader and Writer Library.
However, what I do not really understand is whether I can read directly from a path where a JSON file resides and parse it into a graph/GraphJSON.
Can you help?
This is how I would solve this issue:
Read your JSON files using GSON or Jackson, then
feed this data into subclasses of Vertex/Edge from your implementation of these TinkerPop 3 interfaces.
Use the GraphSON writer methods to "graphitise" your data, and save it into an OutputStream (a rough sketch follows at the end of this answer).
I'm assuming you're using TinkerPop 3 and Titan 1.0.0; this is the right documentation.
Good luck!
P.S.: If you're doing this for the sake of importing data into Titan, you might be overcomplicating the issue of data import. Just import it straight away.
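If you do go the conversion route, here is a heavily hedged Python sketch of the reshaping step (the output follows the older GraphSON 1.0 layout with "_id"/"_outV"/"_inV"; verify the exact field names against the GraphSON reference for your TinkerPop/Titan version, and note the answer above describes the Java stack instead):

import json

with open("input.json") as f:
    data = json.load(f)

# Reshape a generic node/link JSON file into a GraphSON-style document.
graphson = {
    "mode": "NORMAL",
    "vertices": [{"_id": n["id"], "_type": "vertex", "name": n.get("name")}
                 for n in data["nodes"]],
    "edges": [{"_id": i, "_type": "edge", "_label": e.get("label", "related"),
               "_outV": e["from"], "_inV": e["to"]}
              for i, e in enumerate(data["links"])],
}

with open("graph.graphson.json", "w") as f:
    json.dump(graphson, f)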

MarkLogic Java API batch upload files (.csv)

I'm trying out the MarkLogic Java API and want to bulk-upload some files with the extension .csv.
I'm not sure what to use, since the Java API only supports JSON, XML, and TXT files.
How do I batch-upload files using the MarkLogic Java API? Do I convert everything to JSON?
Do I convert everything to JSON?
Yes, that is a common way to do it.
If you would like additional examples of how you can wrangle CSV with the Java Client API, check out OpenCSVBatcherExample and JacksonDatabindTest.testDatabindingThirdPartyPojoWithMixinAnnotations. The first demonstrates converting the CSV to XML and using a custom REST extension; the second (well, unit test...) demonstrates converting the CSV to JSON and using the batch-upload (Bulk Writes) capabilities Justin linked to.
If you have CSV files on your filesystem, I’d start with mlcp, as suggested above. It will handle all of the parsing and splitting into multiple transactions/batches for you. Take a look at the mlcp documentation for more details and some example configurations.
If you’d like more control over the parsing and splitting logic than mlcp gives you out-of-the-box or you’re getting CSV from some other source (i.e. not files on the filesystem), you can use the Java Client API. The Java Client API allows you to efficiently write batches using a WriteSet. Take a look at the “Bulk Writes” example.
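If you end up rolling your own instead, here is a hedged sketch of the convert-and-upload idea, written in Python against MarkLogic's REST endpoint rather than the Java Client API (host, port, credentials, and the URI scheme are placeholders):

import csv
import json
import requests
from requests.auth import HTTPDigestAuth

auth = HTTPDigestAuth("admin", "admin")
with open("data.csv", newline="") as f:
    # Turn each CSV row into one JSON document and PUT it to /v1/documents.
    for i, row in enumerate(csv.DictReader(f)):
        resp = requests.put(
            "http://localhost:8000/v1/documents",
            params={"uri": "/csv/row-%d.json" % i},
            headers={"Content-Type": "application/json"},
            data=json.dumps(row),
            auth=auth,
        )
        resp.raise_for_status()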
According to your reply to Justin, you cannot use MLCP because it is a command-line tool and you need to integrate it into a web portal.
Well, MLCP is released as open-source software under the Apache 2 licence, so if you are happy with this licence, then you have the source to integrate.
But what I see as your main problem statement is more specific:
How can I create multiple XML or JSON documents from a CSV file [allowing the use of the Java API to then upload them as documents in MarkLogic]?
With that specific problem statement:
1) Have a look at SplitDelimitedTextReader.java from the mlcp source.
2) Try some Java libraries for this purpose, such as http://jsefa.sourceforge.net/quick-tutorial.html