I'm trying to retrieve the data from this page, and this is the specific JSON file associated with it.
I can't seem to extract any meaningful data about the facilities; it seems like the JSON file is just a description with no observations.
I'm trying to import it using R packages. I've tried JSON viewers. The documentation page from the first link seems to be in XML.
To get data from that URL:
library(jsonlite) #install these if need be
library(curl)
my_dat <- fromJSON('http://catalog.data.gov/harvest/object/5630b80a-23dc-483f-8164-876dac1a9757')
Now you have a list called my_dat with the contents:
> my_dat
$`@type`
[1] "dcat:Dataset"
$accessLevel
[1] "public"
$bureauCode
[1] "029:40"
$contactPoint
$contactPoint$`@type`
[1] "vcard:Contact"
$contactPoint$fn
[1] "Kevin Reid"
$contactPoint$hasEmail
[1] "mailto:vawebservices@med.va.gov"
$description
[1] "The facilities web service provides VA facility information. The VA facilities locator is a feature that is available across the enterprise, on any webpage, for the Department of Veterans Affairs. It is comprised of data from across the different facilities, and is updated multiple times a day by multiple personnel. This API gives external users the ability to interact with the most up to date information about VA facilities."
$distribution
@type accessURL format title
1 dcat:Distribution http://www.va.gov/webservices/fandl/documentation/fandl.cfm API VA Facilities & Leadership
$identifier
[1] "VA-OIT-WSG-01"
$keyword
[1] "Building Information" "Facilities" "Location" "VA"
If that's not the content you were expecting, it's an issue with the data source. But that's how to ingest JSON from a URL into R.
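For comparison, the same ingestion pattern works outside R too. Below is a minimal Python sketch using only the standard library; the `fetch_json` helper is a hypothetical name, and the trimmed sample record stands in for a live request to the harvest URL:

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    """Download a URL and parse the response body as JSON (dicts/lists)."""
    with urlopen(url) as resp:
        return json.load(resp)

# The harvest object is plain DCAT metadata, so parsing it yields an
# ordinary dict; here a trimmed sample stands in for the live fetch.
sample = '{"accessLevel": "public", "identifier": "VA-OIT-WSG-01"}'
record = json.loads(sample)
print(record["identifier"])  # VA-OIT-WSG-01
```

Calling `fetch_json('http://catalog.data.gov/harvest/object/5630b80a-23dc-483f-8164-876dac1a9757')` would return the full metadata record as a dict, with the same fields the R output above shows.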
Where can I find a complete UPS Shipment JSON developer guide?
I know that they have a PDF guide for shipment, but I think not all fields are present in their JSON developer PDF guide. For example, the number-of-packages field is missing. And is there a field where you can put the package value, like 100 USD?
It's not well explained. :(
UPS uses the same query structure everywhere.
The description of all fields is in the "Shipping Package Web Service Developer Guide.pdf". To use JSON, you just need to send a request to the URL specified in the "Shipping Package JSON Developer Guide.pdf".
How should the chatbot conversation raw-data Excel/CSV export look?
I am currently building a chatbot with IBM Watson. The client wants the chatbot conversation raw data exported in Excel/CSV format. As the conversations are very long, what is the most appropriate format for the chatbot conversation raw-data CSV?
Thanks!
Based on your question, I suggest you use the Logs examples in the API Reference as a basis for what your CSV file is going to contain, for example: inputText, outputText, conversation_id, dateConversation, intents, entities, context, workspace_id, etc.
Here is an example using Python that will return this data (you can also use another language that has HTTP request support):
List logs:
list_logs(self, workspace_id, sort=None, filter=None, page_limit=None, cursor=None)
Request:
import json
import watson_developer_cloud

# Credentials for the (pre-IAM) Conversation service
conversation = watson_developer_cloud.ConversationV1(
    username='{username}',
    password='{password}',
    version='2018-02-16'
)

# List the log events for one workspace
response = conversation.list_logs(
    workspace_id='9978a49e-ea89-4493-b33d-82298d3db20d'
)

print(json.dumps(response, indent=2))
Note: take a look at this example, which saves the conversation logs inside a txt file; you can also check this answer to use the same logic to save to a CSV file.
Note: Watson Conversation has since been renamed Watson Assistant.
Official Watson Conversation API Reference (cURL, Node, Java, Python examples)
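The flattening step from log JSON to CSV rows can be sketched as follows. The entry below mimics the Conversation /logs response fields (request, response, request_timestamp), but the sample values themselves are made up, and `logs_to_csv` is a hypothetical helper, not part of the SDK:

```python
import csv
import io

# Hypothetical log entries shaped like the Watson Conversation /logs
# output (each entry carries a request, a response and a timestamp).
logs = [
    {
        "request": {"input": {"text": "hello"}},
        "response": {
            "input": {"text": "hello"},
            "output": {"text": ["Hi! How can I help?"]},
            "intents": [{"intent": "greeting", "confidence": 0.97}],
            "context": {"conversation_id": "abc-123"},
        },
        "request_timestamp": "2018-03-01T10:00:00.000Z",
    },
]

def logs_to_csv(entries):
    """Flatten each log entry into one CSV row and return the CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=["conversation_id", "timestamp", "input", "output", "intent"],
    )
    writer.writeheader()
    for entry in entries:
        resp = entry["response"]
        writer.writerow({
            "conversation_id": resp["context"]["conversation_id"],
            "timestamp": entry["request_timestamp"],
            "input": resp["input"]["text"],
            "output": " | ".join(resp["output"]["text"]),
            "intent": resp["intents"][0]["intent"] if resp["intents"] else "",
        })
    return buf.getvalue()

print(logs_to_csv(logs))
```

One row per log event, with multi-sentence outputs joined into a single cell, tends to be the most readable shape for a client-facing spreadsheet.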
I currently have a program that scrapes the text, author, and timestamp from the news websites I visit and pushes them to a Firebase database in JSON format. I'm now pulling that data into R to perform all the fun sentiment and NLP analysis. When I first started, I had the data public and, using curl, I could easily pull the JSON data into R. Code:
library(RJSONIO)
library(RCurl)
# grab the data
raw_data <- getURL("https://[PROJECT_ID].firebaseio.com/.json")
# convert from JSON into a list in R
dataJSON <- fromJSON(raw_data)
I have since switched the Firebase database from public to private, and now when I run the above code I receive a "Permission Denied" error. My question is: how can I access my JSON data now that the database is private?
I created a service admin account in Firebase and generated a private key, after reading that this would give me an auth token that I could use in something like this format:
curl 'https://[PROJECT_ID].firebaseio-demo.com/users/jack/name.json?access_token=<TOKEN>'
But in the private key file there is no "token" field.
If anyone has any insight or recommendations it'd be hugely appreciated!
Thanks,
Ben
I'm doing a small task that requires the use of GitHub's APIs.
Most of the data can be retrieved from the main API, but some specific information sits behind a separate API endpoint for each row.
Here's how the main API is read into R, using the jsonlite package:
The next task is to tell R to sum the number of labels that are 0 for each of the rows in that "url" column. For that, R needs to read each API URL separately.
I'm trying to do something like this:
library(jsonlite)
issues <- fromJSON("THE API LINK")
jsonapis <- issues$url
apis <- fromJSON(jsonapis)
That retrieves the following error message:
Error: Argument 'txt' must be a JSON string, URL or file.
To use fromJSON with a URL, I'd have to copy and paste each one manually, I think, and all the suggestions I found regarding 'file' don't fit.
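For reference, the counting logic itself is simple once every URL has been fetched one at a time. Here is a minimal Python sketch of that logic; the issue payloads are hypothetical, shaped like GitHub's issue objects (each one has a "labels" array):

```python
import json

# Hypothetical payloads shaped like GitHub issue objects (each one has
# a "labels" array); in practice each dict would come from fetching one
# URL out of the "url" column, the way lapply(jsonapis, fromJSON)
# would do it in R, since fromJSON accepts only a single URL at a time.
responses = [
    json.loads('{"number": 1, "labels": [{"name": "bug"}]}'),
    json.loads('{"number": 2, "labels": []}'),
    json.loads('{"number": 3, "labels": []}'),
]

# Count how many issues have zero labels.
zero_label_count = sum(1 for issue in responses if len(issue["labels"]) == 0)
print(zero_label_count)  # 2
```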
I have another, similar task that involves reading from many APIs simultaneously, but I presume I'll be able to finish that one if I can understand this one.
Can someone help me?
I apologize if this is not very clear, but it's the best I can do. Unfortunately, I'm not too comfortable with these topics, and APIs are a real pain for me.
Thanks in advance!
I am building an application that can import JSON data. I want to test with about 10k entries, and I don't feel like building a JSON string with that many entries by hand. Does anyone know of a location where I could find some generic populated JSON files? (Music albums, movie listings, animal kingdom, census data, car models... I'm not horribly picky; I just need some good data to test with.)
This is a site in beta that can give you data in JSON, XML, or CSV. All lists are customizable. This is a sample call: http://mysafeinfo.com/api/data?list=englishmonarchs&format=json
Documentation here: http://mysafeinfo.com/content/documentation; see the full list under Datasets in the main menu.
There are a few websites with raw data that can be customized into JSON format. Here is a link to some raw data information: Data
Vienna's Open Data website provides various databases, for example their tree register.
Edit:
The old link is dead. A great place to find test data is Socrata, which aggregates government data. There's plenty of data to test with:
https://dev.socrata.com/data/
If you look through the catalogs, there's usually a JSON export option. Here's an example from the National Service for Age Group and volunteering.