GetOrgChart JSON format

A small startup I'm doing work for is looking for a JavaScript org chart, and we believe we'd like to use "GetOrgChart" from getorgchart.com.
We already have a working back-end that provides JSON data to the front-end via RESTful services.
We know that GetOrgChart can be loaded with data from various sources; in this case, what format does the JSON have to be in?
Are there any examples of what the JSON should look like?
We'd like to download and register this product, but that is one of the questions we'd like answered first.
Thanks!

On their demos page, you can click the 'Get HTML Code' link (upper right, below the site header), which opens the JavaScript used to render the demo, including the format of the data.
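For reference, the demo data is simply an array of node objects, each with an id, a parentId, and whatever display fields you configure. The sketch below shows the rough shape; the field names beyond id/parentId are illustrative, so check the demo code for the exact option names:

// Illustrative sketch of the data format used in the GetOrgChart demos.
// The back-end can return this array as JSON; the display fields are up to you.
var orgChartData = [
    { id: 1, parentId: null, Name: "Amber McKenzie", Title: "CEO" },
    { id: 2, parentId: 1,    Name: "Ava Field",      Title: "CTO" },
    { id: 3, parentId: 1,    Name: "Evie Johnson",   Title: "CFO" }
];

var chart = new getOrgChart(document.getElementById("people"), {
    primaryFields: ["Name", "Title"],   // fields shown in each box
    dataSource: orgChartData            // the JSON your REST service returns
});

The key point is that the hierarchy is expressed through id/parentId pairs, so a RESTful service only needs to return a flat array in that shape.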

Related

Extract JSON Data From ThingSpeak API

So I want to get the value from one of my fields in ThingSpeak. I'm able to extract data from my channel, but I want to get only one specific field.
I read the documentation, and the API link looks like this:
https://api.thingspeak.com/channels/<channel_id>/feeds.json?results=1
and when I opened the link it showed this:
{"channel":{"id":1688112,"name":"ESP8266 - Web Controlled LED","latitude":"0.0","longitude":"0.0","field1":"Command","field2":"Red LED","field3":"Green LED","field4":"Blue Led","created_at":"2022-03-29T00:36:06Z","updated_at":"2022-04-06T03:12:36Z","last_entry_id":443},"feeds":[{"created_at":"2022-04-10T07:06:01Z","entry_id":443,"field1":null,"field2":"0","field3":"0","field4":"0"}]}
So my question is: how do I extract the data, for example from my field2, where "field2":"0"?
I want to use it in my HTML project, where it can later drive some functions in my content.
Thanks!
It really depends on the environment you use, but there is usually a JSON library available for it (in the browser, JSON parsing is built in). With it you can extract any field from the JSON response.
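Since the goal here is an HTML page, a minimal browser-side sketch using fetch would look like this; the URL uses the channel id from the question, and field2 matches the response shown above:

// Fetch the latest feed entry from the ThingSpeak channel and read field2.
const url = "https://api.thingspeak.com/channels/1688112/feeds.json?results=1";

fetch(url)
    .then(response => response.json())
    .then(data => {
        // "feeds" is an array; with results=1 it holds only the latest entry.
        const latest = data.feeds[0];
        const field2 = latest.field2;   // e.g. "0" (a string, per the response above)
        console.log("Red LED state:", field2);
        // ...use field2 to update elements on the page, trigger functions, etc.
    })
    .catch(err => console.error("Request failed:", err));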

How to add XML data to Wix page?

I am trying to create a webpage or page element that will read and display the data from an external XML data feed. I can't seem to find documentation on their site that will help and I am very new to this.
This is the XML url generated: https://spacedout.ampsuite.com/xml/releases?cid=2&s_date=2018-01-01&e_date=2019-01-11&order=release_date&dir=desc&limit=10
And this is an example of how I would like it displayed: https://client.ampsuite.com/
Pretty much just the section under "featured releases" that lists current music releases.
You can use a Wix Code backend function to do that. All you need is the wix-fetch API to get the data; then you can parse the XML using xml-js (a Node module you can install in the backend).
In your client code you'll need to call your backend function and then inject the results into something like a repeater or table element in your UI.
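Here is a rough sketch of that split, assuming a backend web module named backend/feed.jsw and page code that calls it (the names are just examples, and the xml-js import style may differ slightly depending on how you install it):

// backend/feed.jsw -- runs on the server, fetches and parses the XML feed.
import { fetch } from 'wix-fetch';
import xmlJs from 'xml-js';

export async function getReleases() {
    const url = "https://spacedout.ampsuite.com/xml/releases?cid=2&s_date=2018-01-01&e_date=2019-01-11&order=release_date&dir=desc&limit=10";
    const response = await fetch(url);
    const xml = await response.text();
    // Convert the XML document into a plain JS object the page code can use.
    return xmlJs.xml2js(xml, { compact: true });
}

// Page code -- call the backend function and feed the result to your UI.
import { getReleases } from 'backend/feed';

$w.onReady(async () => {
    const releases = await getReleases();
    // Map the parsed feed into the item shape your repeater or table expects.
    console.log(releases);
});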

Trying to load data from a URL in R

So I want to load all the formatted data from this URL: https://data.mo.gov/Government-Administration/2011-State-Expenditures/nyk8-k9ti
into R so I can filter some of it out. I know how to filter it properly once I have it, but I can't get it "injected" into R properly.
I've seen many ways to pull the data if the URL ends in ".txt" or ".csv", but since this URL doesn't end in a file type, the only way I know to get it is to pull the HTML, but then I get... all the HTML.
There are several options to download the file as a .csv and load it that way, but if I ever get good enough to do real work, I feel like I should know how to get this directly from the source.
The closest I've gotten is using a function that fails with the error
XML content does not seem to be XML: 'https://data.mo.gov/Government-Administration/2011-State-Expenditures/nyk8-k9ti'
so that doesn't work either :(.
If anyone could help me out or at least point me in the right direction, I'd appreciate it greatly.
Scraping the data out of the rendered table would be quite complicated, but this website provides a convenient .json endpoint which you can access quite easily from R. The link https://data.mo.gov/resource/nyk8-k9ti.json can be found under Export -> SODA API.
library(rjson)   # JSON parser; install.packages("rjson") if you don't have it
# Pass the URL via the file argument so rjson reads and then parses the response.
data <- fromJSON(file = 'https://data.mo.gov/resource/nyk8-k9ti.json')
I believe your question could be more precisely described as "how to scrape data from a website" rather than simply loading data from a URL in R. Web scraping is a different technique altogether. If you know some Python, I recommend taking this free course that teaches you how to access data on websites via Python. Or you can try this website to get what you want, though some of the advanced tools are not free. Hope it helps.

How to style a result page of a connector?

I have created a "connector" with a very nice tool called import.io, which lets me run a search query against another website and get back a result list. I followed another stackoverflow.com article to do this:
basic import.io html search
This works well. But my question now is:
how do I style my HTML result list with CSS like on this site?
Thanks
To get the data from your API into a web page, you need to access the API via a programming language or script. Once you have the API return the data as JSON, you could try something like http://json2html.com/ to convert the data into HTML and write that to your page.
Alternatively, you could download the data as CSV, open it in Excel, wrap HTML tags around the data, and copy-paste that into your website. It's not ideal, but at least you can get the data online.
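As a rough sketch of the general idea (without the json2html library), assuming the connector's results are available as JSON at a hypothetical URL with title and url fields, you could build the list yourself and then style the generated elements with ordinary CSS classes:

// Hypothetical example: build a styleable result list from a JSON response.
// The endpoint and the "title"/"url" field names are placeholders for
// whatever your import.io connector actually returns.
fetch("https://example.com/my-connector-results.json")
    .then(response => response.json())
    .then(results => {
        const list = document.getElementById("resultlist");
        results.forEach(item => {
            const li = document.createElement("li");
            li.className = "result-item";   // style .result-item in your stylesheet
            li.innerHTML = `<a href="${item.url}">${item.title}</a>`;
            list.appendChild(li);
        });
    });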

Using a JSON file instead of a database - feasible?

Imagine I've created a new JavaScript framework and want to showcase some examples that utilise it, and let other people add examples if they want. Crucially, I want this all to be on GitHub.
I imagine I would need to provide a template HTML document which includes the framework, and sorts out all the header and footer correctly. People would then add examples into the examples folder.
However, doing it this way, I would just end up with a long list of HTML files. What would I need to do if I wanted to add some sort of metadata about each example, like tags/author/date etc, which I could then provide search functionality on? If it was just me working on this, I think I would probably set up a database. But because it's a collaboration, this is a bit tricky.
Would it work if each HTML file had a corresponding entry in a JSON file listing all the examples where I could put this metadata? Would I be able to create some basic search functionality using this? Would it be a case of: Step 1 : create new example file, step 2: add reference to file and file metadata to JSON file?
A good example of something similar to what I want is wbond's package manager http://wbond.net/sublime_packages/community
(There is not going to be a lot of create/update/destroy going on; mainly just reading.)
Check out this JavaScript database: http://www.taffydb.com/
There are other JavaScript databases that let you load JSON data and then do database operations; Taffy lets you search for documents.
It sounds like a good idea to me, though: making HTML files along with an associated JSON document that holds metadata about them.
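As a rough sketch of that metadata-file idea: each example HTML file gets an entry in a JSON index (the file names and fields below are hypothetical), and that array can be loaded straight into TaffyDB, or just filtered with plain JavaScript, for basic searching:

// examples.json -- one entry per example; contributors add an entry
// alongside their HTML file (all names and fields here are illustrative).
var examples = [
    { file: "examples/bouncing-ball.html", title: "Bouncing ball",
      author: "jane", date: "2013-05-01", tags: ["animation", "canvas"] },
    { file: "examples/todo-list.html", title: "Todo list",
      author: "bob", date: "2013-06-12", tags: ["forms", "storage"] }
];

// Basic searching with TaffyDB: load the array, then query it.
var db = TAFFY(examples);
var byAuthor = db({ author: "jane" }).get();   // exact-match query

// Or, without any library, plain JavaScript covers simple tag searches.
var tagged = examples.filter(function (ex) {
    return ex.tags.indexOf("canvas") !== -1;
});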