I have a JSON file that is used by monitoring software to monitor a specific device.
I want to monitor a similar device for which I don't have a JSON file.
I know everything about the new device that is needed to "fill in the blanks" in the existing JSON structure.
A brute-force approach would be to create a script that reads the data for the new device from a hand-crafted input file and outputs the nodes and leaves in the same JSON structure.
Before I take this path I would like to know if there is some tool that could help me with this task. Surely this "wheel" is not new; I don't want to re-invent it.
Does anyone know of a tool that uses one JSON file as a template and generates another, changing the values from some other source?
I am on Linux and can write scripts in bash and Python.
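For illustration, here is the kind of brute-force script I have in mind, as a minimal Python sketch. It assumes the new device's values arrive as a flat JSON map of dotted paths to values; the file names and that input format are my own invention:

import json

def fill(node, values, path=""):
    # Recursively copy the template, replacing any leaf whose dotted
    # path (e.g. "device.ports.0.speed") appears in the values map.
    if isinstance(node, dict):
        return {k: fill(v, values, f"{path}.{k}".lstrip("."))
                for k, v in node.items()}
    if isinstance(node, list):
        return [fill(v, values, f"{path}.{i}") for i, v in enumerate(node)]
    return values.get(path, node)  # leaf: keep old value unless overridden

with open("old_device.json") as f:         # existing template JSON
    template = json.load(f)
with open("new_device_values.json") as f:  # flat {"dotted.path": value} map
    new_values = json.load(f)

with open("new_device.json", "w") as f:
    json.dump(fill(template, new_values), f, indent=2)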
I am working on a GUI (PyQt6) where the user loads some files and adds metadata for each. For each file, I would like to have a window (QDialog) which is essentially a form whose fields are specified by a JSON schema.
I am new to PyQt, and I can code the mechanism to open a form (a QDialog with QLabels and QLineEdits which essentially looks like a form; see image). However, I would like to automate this step using the schema that's stored in a JSON file (consider any example JSON schema).
Is there already a Python library for this and I'm absolutely oblivious to it?
How would you go about coding this? Thanks!
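For reference, here is roughly what I imagine generating by hand, as a minimal sketch. It only handles flat schemas whose top-level properties are rendered as line edits, and the example schema is made up:

import sys
from PyQt6.QtWidgets import (
    QApplication, QDialog, QDialogButtonBox, QFormLayout, QLineEdit,
)

class SchemaFormDialog(QDialog):
    # Builds a form from the top-level properties of a JSON-schema-like dict.
    def __init__(self, schema, parent=None):
        super().__init__(parent)
        self.setWindowTitle(schema.get("title", "Metadata"))
        layout = QFormLayout(self)
        self.fields = {}
        for name, spec in schema.get("properties", {}).items():
            edit = QLineEdit(self)
            edit.setPlaceholderText(spec.get("description", ""))
            layout.addRow(spec.get("title", name), edit)
            self.fields[name] = edit
        buttons = QDialogButtonBox(QDialogButtonBox.StandardButton.Ok
                                   | QDialogButtonBox.StandardButton.Cancel)
        buttons.accepted.connect(self.accept)
        buttons.rejected.connect(self.reject)
        layout.addRow(buttons)

    def values(self):
        return {name: edit.text() for name, edit in self.fields.items()}

if __name__ == "__main__":
    schema = {  # made-up example; in the app this comes from the schema file
        "title": "File metadata",
        "properties": {
            "author": {"title": "Author", "description": "Who created it"},
            "date": {"title": "Date", "description": "YYYY-MM-DD"},
        },
    }
    app = QApplication(sys.argv)
    dlg = SchemaFormDialog(schema)
    if dlg.exec():
        print(dlg.values())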
I'm looking for ideas for an open-source ETL or data-processing tool that can monitor a folder for CSV files, then open and parse each CSV.
For each CSV row, the software will transform the row into a JSON format and make an API call to start a Camunda BPM process, passing the cell data as variables into the process.
Looking for ideas,
Thanks
You can use a Java WatchService or Spring FileSystemWatcher as discussed here with examples:
How to monitor folder/directory in spring?
referencing also:
https://www.baeldung.com/java-nio2-watchservice
Once you have picked up the CSV, you can use my example here as inspiration or extend it: https://github.com/rob2universe/csv-process-starter, specifically
https://github.com/rob2universe/csv-process-starter/blob/main/src/main/java/com/camunda/example/service/CsvConverter.java#L48
The example starts a configurable process for every row in the CSV and includes the content of the row as JSON process data.
I wanted to limit the dependencies of this example, so the CSV parsing logic applied is very simple: commas inside fields may break the example, and special characters may not be handled correctly. A more robust implementation could replace the simple Java String.split(",") with an existing CSV parser library such as OpenCSV.
The file watcher would actually be a nice extension to the example. I may add it when I get around to it, but I would also accept a pull request in case you fork my project.
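If Java isn't a hard requirement, the same pipeline is also easy to sketch in Python. This is only an illustration: the watch folder and process key are made up, it polls instead of using a real file watcher, and it assumes Camunda 7's REST API under the default /engine-rest base path plus the requests library:

import csv
import time
from pathlib import Path

import requests  # pip install requests

WATCH_DIR = Path("./incoming")                 # made-up folder to watch
CAMUNDA = "http://localhost:8080/engine-rest"  # default Camunda 7 REST base
PROCESS_KEY = "my-process"                     # made-up process definition key

def start_process_for_row(header, row):
    # Camunda expects variables as {"name": {"value": ..., "type": ...}}.
    variables = {col: {"value": val, "type": "String"}
                 for col, val in zip(header, row)}
    resp = requests.post(
        f"{CAMUNDA}/process-definition/key/{PROCESS_KEY}/start",
        json={"variables": variables})
    resp.raise_for_status()

def process_file(path):
    with path.open(newline="") as f:
        reader = csv.reader(f)     # swap in a robust CSV parser if needed
        header = next(reader)
        for row in reader:
            start_process_for_row(header, row)

seen = set()
while True:  # naive polling loop; a WatchService/watchdog is more robust
    for path in WATCH_DIR.glob("*.csv"):
        if path not in seen:
            process_file(path)
            seen.add(path)
    time.sleep(5)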
How can I edit a large JSON manually?
I have a large JSON file, about 100 MB. I'd like to manually inspect some attributes, and then add more attributes to some of the objects.
I'd start off by looking at a subset of the file, say, the first 100 objects. I'd gradually scale up, looking at maybe 250, then a thousand, etc.
Can someone suggest a language or software (I'm running Windows) that excels at this task?
Some previous suggestions that aren't working or can't work:
Sublime - Could never load the file. Loading bar forever. Had to kill.
NotePad++ - Could never load. Froze. Had to kill.
Anything online - The data is confidential.
More Python and Jupyter information.
import json  # needed for json.load

with open(path, 'r') as f:
    data = json.load(f)

for i, (k, v) in enumerate(data.items()):
    print(i, k, v)
    if i == 2:
        break
This causes an error. I think it has to do with Jupyter, but I'm not sure:
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_data_rate_limit`.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
That makes me wonder if going about it this way is just dumb.
Possible Solutions
Build a custom app using Tkinter
Just don't use a Jupyter Notebook
What you can do is write a simple GUI program: use Tkinter to create a window with a text area to show the JSON, a text box where you input how many objects you want to see, a button named Next (or similar) to page through, and one more button to save. The functionality of each item would be as follows (a sketch is given after this list).
First, you read the complete JSON in Python and make it a dict.
Next button - keeps iterating based on the value in the text box. You could write a custom generator that yields the required number of values each time.
Save button - saves the current JSON into a new file, or, if you can, write a function to update the current JSON in place.
Text area - takes the dictionary slice from the Next button's generator, converts it back to JSON, and shows it.
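A minimal Tkinter sketch of that idea, assuming the file holds a single top-level JSON object (the path is made up):

import itertools
import json
import tkinter as tk

PATH = "data.json"  # made-up path to the large JSON file

with open(PATH) as f:
    data = json.load(f)   # read the complete JSON and make it a dict

items = iter(data.items())  # iterator consumed batch-by-batch by Next

def show_next():
    # Show the next N items, where N comes from the text box.
    count = int(count_box.get() or 100)
    chunk = dict(itertools.islice(items, count))
    text.delete("1.0", tk.END)
    text.insert(tk.END, json.dumps(chunk, indent=2))

def save():
    # Save the (possibly updated) dict into a new JSON file.
    with open(PATH + ".edited", "w") as f:
        json.dump(data, f, indent=2)

root = tk.Tk()
root.title("JSON pager")
count_box = tk.Entry(root)
count_box.insert(0, "100")
count_box.pack()
tk.Button(root, text="Next", command=show_next).pack()
tk.Button(root, text="Save", command=save).pack()
text = tk.Text(root, width=100, height=40)
text.pack()
root.mainloop()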
If you are using Linux (or have the opportunity to transfer the file to a *nix box), you might wish to check the number of lines in the file via
wc -l myfile.json
Let's say, for the purpose of simplicity, that your file has 2,530,000 lines and you wish to split it into chunks of 100k lines each. You can utilize any of the commands available in your distro (e.g. split -l 100000 myfile.json) to split the file into the desired chunks and then edit them, one by one.
If you are comfortable with going the "Linux way", check out some of the hints given on other topics, e.g.
edit multi-GB file when vi editor doesn't work
I hope it helps!
The only viewer I have used that works on large files (I had files up to 250 MB) is Dadroit. It is fast to view and comes with search.
Now, to edit, I use vi: I search for the location and make local edits. Vim or another simpler editor should work on Windows. Have you tried VS Code? 100 MB shouldn't be too large for it.
The other awesome terminal tool for viewing and editing data is VisiData. I have had mixed luck with it working on JSON files.
Not the best answer, but the problem with reading the JSON seems limited to Jupyter Notebooks (or even the limitations of my laptop).
Working in Spyder or running from the command line circumvents the Jupyter error mentioned in the original question.
It'd be great if someone knew how to tweak Jupyter to avoid this problem (sorry, I'm not sure how yet).
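One thing that looks promising: the error output itself names the knob, so raising that limit in the Jupyter config file may avoid the error. Untested on my side; it assumes the default config location, and the file can be generated with jupyter notebook --generate-config if it doesn't exist:

# In ~/.jupyter/jupyter_notebook_config.py (c is provided by Jupyter):
c.NotebookApp.iopub_data_rate_limit = 1e10  # well above the 1e6 bytes/sec default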
For an editor, try Notepad++.
For a language, try Python.
Since you haven't given your data structure, I can't give a more specific answer.
I would like to visualize data from a CSV file in the Node-RED UI.
What I would like to do is show, behind each country's flag, the quantity from the CSV file. The CSV file has two columns (country, quantity).
Because I am new to Node-RED, I would like to get some hints on how to do that.
Thanks in advance.
my flow with CSV data
Welcome to Node-RED!
Firstly you need to decide what kind of UI you would like. Node-RED offers a number of options, ranging from data-driven web pages built with the http-in/out and template nodes, through the more dynamic but slightly more complex Dashboard, to full-power dynamic web apps using things like node-red-contrib-uibuilder.
The very simplest approach is to use an http-in and an http-out node to define a web page. Then add your file-read node after the http-in, followed by the CSV node (which turns the CSV data into JSON). Then you could use node-red-contrib-tableify to turn your JSON into an HTML table. Finally, use the template node to insert the table into the HTML that the http-out node sends back to the browser:
http-in -> file read -> csv -> tableify -> template -> http-out
Once you've mastered that, you could go on either to smarten up the template or swap to using Dashboard or even uibuilder depending on your needs.
I have to create a web page first, right?
You define the URL in the http-in node. When the -in is connected to the -out, you have a "page", albeit with no content. To create content you can use the template node. In fact, pushing the CSV data through the tableify node and into the template would give you enough of a page to see the data. The template itself need only be:
<pre>{{payload}}</pre>
Though, of course, you can also wrap that with other HTML elements as needed. (One caveat: the template node's Mustache {{payload}} escapes HTML, so to insert the table markup from tableify unescaped, use triple braces, {{{payload}}}.) But that alone should be enough to render something useful.
How can I trigger the http-in?
You simply reference the URL from your browser. So if you set the http-in node to use the URL /fred and you use a browser on the same device that is running Node-RED, you would browse to http://localhost:1880/fred.
How should I design the web page so that the http-out node can put the information from the CSV file into it?
The tableify node does that for you.
String together what I've outlined and you should see something that will let you go further.
I suggest starting with just the http-in, template and http-out nodes so that you can see how they work together. Then feed in your data without the csv or tableify nodes, then add the csv node, and finally tableify. That way you can see how things work at each step.
Ansible allows devs to write programs (in any language) that will return JSON describing the dynamic "snapshot" of current hosts. I'm using vSphere, which is currently not supported by Ansible OSS, and so I need to write such a "custom inventory plugin".
I can handle the querying of vSphere for a list of hosts, as well as constructing the JSON that is compatible with what Ansible is expecting.
Where the documentation (seemingly) falls completely flat is:
How do I "connect" Ansible with my inventory app? That is, say my inventory app is a simple bash script (inventory.sh): how do I configure Ansible to call bash inventory.sh and obtain JSON from it? In reality the app will likely be a Java executable (inventory.jar), but I figure that if I can get it working with bash, I can extrapolate to Java; and
How does Ansible actually capture/fetch the JSON back from the app? STDOUT? Is this all supposed to happen over an HTTP connection? Are there examples? How does inventory.sh or inventory.jar communicate that JSON back to Ansible?
The inventory script has to be located on the same machine where Ansible runs. It does not communicate through HTTP; Ansible will simply parse the STDOUT of your program. Beyond that, the location does not matter at all; you just have to pass the path to Ansible when you call it:
ansible-playbook ... -i /path/to/your/inventory.sh
To avoid passing the inventory location every time, you could add this to your ansible.cfg:
[defaults]
inventory = /path/to/your/inventory.sh
You could also copy the script to /etc/ansible/hosts, which is the default location where Ansible will look for inventory files/scripts, but I prefer to keep things together, so I suggest placing it close to your playbooks/roles etc.
And (3): Is any of this documented anywhere? I don't see anything in the Ansible docs...
It is not mentioned on the page Developing Dynamic Inventory Sources, but it can be seen in some examples on the page Dynamic Inventory. The docs are community-managed and at times a little unstructured and lacking important information.
BTW, there is a VMware inventory script included. Looking at the source, I have seen that it imports some vSphere stuff. I have little experience with VMware, so I can't judge whether this is actually what you need, in which case you wouldn't have to write your own.
This is completely user-defined. Typically you would write your dynamic inventory in Python and print a JSON dump of the output to create the inventory.
Here is an example for the use case you mentioned (vSphere): https://github.com/RaymiiOrg/ansible-vmware/blob/master/query.py
In a nutshell, you create it like a normal Python file, define the options (as he does in main), and selectively execute functions based on which options are passed. These will make REST calls and return the output in the form of a JSON dump, which Ansible can parse for use as inventory.
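To make that contract concrete, here is a minimal skeleton of such a script (group and host names are made up; a real one would query vSphere instead of hard-coding them). Ansible calls it with --list for the whole inventory and --host <name> for one host's variables, and parses whatever JSON it prints to STDOUT:

#!/usr/bin/env python
import argparse
import json

def get_inventory():
    # A real script would build this from vSphere API calls.
    return {
        "esxi": {"hosts": ["vm01.example.com", "vm02.example.com"],
                 "vars": {"ansible_user": "root"}},
        "_meta": {"hostvars": {
            "vm01.example.com": {"datacenter": "dc1"},
            "vm02.example.com": {"datacenter": "dc2"},
        }},
    }

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--list", action="store_true")
    parser.add_argument("--host")
    args = parser.parse_args()

    inventory = get_inventory()
    if args.host:
        # Per-host variables; empty dict for unknown hosts.
        print(json.dumps(inventory["_meta"]["hostvars"].get(args.host, {})))
    else:
        print(json.dumps(inventory))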