How to download full datasets with the Socrata SODA API

Many datasets hosted on Socrata don't allow access via the normal API; only a zip file of the full dataset, along with some possible attachments, is available. Is there any way to download such a full dataset via the SODA API? For example, I'd like to download the dataset only if the metadata I can see via the Discovery API says it has been updated.

Unfortunately not. Those are special datasets hosted as downloadable assets rather than as data in the Socrata API, so you'll need to fetch them from the asset's landing page. If needed, though, you should be able to screen-scrape that page.
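A workable pattern, then, is to watch the catalog metadata and only pull the blob when it changes. A minimal sketch in Python, assuming a hypothetical domain and dataset id, and assuming the asset is exposed at the standard blob download URL (the exact URL pattern can vary per asset):

```python
from datetime import datetime

DOMAIN = "data.example.gov"   # hypothetical Socrata domain
DATASET_ID = "abcd-1234"      # hypothetical dataset four-by-four id

def is_updated(last_synced_iso, catalog_entry):
    """True if the Discovery API metadata reports a change after our last sync."""
    updated = catalog_entry["results"][0]["resource"]["updatedAt"]
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return parse(updated) > parse(last_synced_iso)

def fetch_if_changed(last_synced_iso):
    import requests  # third-party: pip install requests
    # The Discovery API returns catalog metadata, including updatedAt.
    meta = requests.get(
        "https://api.us.socrata.com/api/catalog/v1",
        params={"ids": DATASET_ID},
        timeout=30,
    ).json()
    if not is_updated(last_synced_iso, meta):
        return None  # nothing new; skip the large download
    # Blob assets are fetched from the asset's download endpoint rather
    # than a SODA resource endpoint (this URL pattern is an assumption).
    resp = requests.get(
        f"https://{DOMAIN}/download/{DATASET_ID}/application%2Fzip",
        timeout=300,
    )
    resp.raise_for_status()
    return resp.content
```

The metadata check is cheap, so it can run on a schedule and the expensive zip download only happens when `updatedAt` moves forward.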

Related

Downloading photos uploaded as a Google Form answer, with a name

I have created a Google Form where I collect info about workers (name, family name, photo, etc.).
I want to download all photos uploaded through the form to my PC, but I want each downloaded photo to have a name built from the name and family answers the worker submitted, i.e. name_family.
This will be used to produce magnetic cards automatically, without the need to go over each worker and choose their image manually.
For uploading files, you can use the methods specified in the documentation:
The Drive API allows you to upload file data when creating or updating a File resource. You can send upload requests in any of the following ways:

Simple upload: uploadType=media. For quick transfer of a small file (5 MB or less). To perform a simple upload, refer to Performing a Simple Upload.

Multipart upload: uploadType=multipart. For quick transfer of a small file (5 MB or less) and metadata describing the file, all in a single request. To perform a multipart upload, refer to Performing a Multipart Upload.

Resumable upload: uploadType=resumable. For more reliable transfer, especially important with large files. Resumable uploads are a good choice for most applications, since they also work for small files at the cost of one additional HTTP request per upload. To perform a resumable upload, refer to Performing a Resumable Upload.

Most Google API client libraries implement at least one of the methods. Refer to the client library documentation for additional details on how to use each of the methods.
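As a sketch of the resumable flow with the official Python client (google-api-python-client; the file name and path here are made up):

```python
def ensure_extension(name, default_ext=".jpg"):
    """Drive can infer a missing extension from the MIME type, but it is
    safer to include one in the name up front."""
    return name if "." in name else name + default_ext

def upload_photo(drive, local_path, drive_name):
    # Third-party: pip install google-api-python-client
    from googleapiclient.http import MediaFileUpload
    # resumable=True selects uploadType=resumable under the hood.
    media = MediaFileUpload(local_path, mimetype="image/jpeg", resumable=True)
    return drive.files().create(
        body={"name": ensure_extension(drive_name)},
        media_body=media,
        fields="id, name, fileExtension",
    ).execute()

# Usage (requires an authorized service object):
# drive = googleapiclient.discovery.build("drive", "v3", credentials=creds)
# created = upload_photo(drive, "cat.jpg", "cat")
```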
Then you can manipulate the file's metadata:
Much of the information you'll need to get started inserting and retrieving files is detailed in the Files reference. Here are a few more important considerations for naming files and working with metadata like thumbnails and indexable text.
Next, here is how to specify file names and extensions:
Apps should specify a file extension in the title property when inserting files with the API. For example, an operation to insert a JPEG file should specify something like "name": "cat.jpg" in the metadata. Subsequent GET requests include the read-only fileExtension property populated with the extension originally specified in the name property. When a Google Drive user requests to download a file, or when the file is downloaded through the sync client, Drive builds a full filename (with extension) based on the title. In cases where the extension is missing, Google Drive attempts to determine the extension based on the file's MIME type.
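For the original question (downloading, not uploading), one hedged sketch in Python: read the form's answers (name, family, and the uploaded file's Drive id, e.g. from the form's responses spreadsheet), then download each photo and save it under a name built from those answers. The helper names and the responses source are assumptions; files().get_media is the documented Drive call for raw file bytes:

```python
import io
import re

def target_filename(name, family, original_name):
    """Build 'name_family.ext', keeping the extension of the uploaded file."""
    ext = "." + original_name.rsplit(".", 1)[1] if "." in original_name else ""
    safe = re.sub(r"[^\w-]", "_", f"{name}_{family}")  # sanitize for the filesystem
    return safe + ext

def download_photo(drive, file_id, dest_path):
    # Third-party: pip install google-api-python-client
    from googleapiclient.http import MediaIoBaseDownload
    request = drive.files().get_media(fileId=file_id)  # raw photo bytes
    with io.FileIO(dest_path, "wb") as out:
        downloader = MediaIoBaseDownload(out, request)
        done = False
        while not done:
            _, done = downloader.next_chunk()

# Usage: download_photo(drive, answer_file_id,
#                       target_filename("Jane", "Doe", "photo.jpg"))
```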

Forge Viewer: retrieving properties from Revit

I set up my app to retrieve property information from Revit; it currently retrieves Constraints, Identity Data, Phasing, etc. But I want it to retrieve documents, links, and images as well.
I created this Parameter to test:
Type Parameter: Image
Group Parameter under: Graphics
Then I added this parameter to an object and uploaded a .png. When I check the result inside the app, only the "Name" of the parameter under "Graphics" appears. There is no content, only a blank value. Is it supposed to be like this, or is there anything I can do to make the file upload work?
The best way to handle that at the moment would be to write a custom Revit add-in that can connect to Forge and upload your model. When doing so, using the Revit API, you could parse the model properties and store the embedded pictures and documents in your own cloud database/storage.
When loading the model in the viewer, you would load a custom extension that does something similar to the blog post you are referring to: connecting to your own database and showing the embedded content.
More elaborate demos of MetaProperties are available here and here, with full source code here.
Hope that helps.

What API to use for Offline Chrome App

I want to develop an offline Chrome application.
Since SQL is not available in an offline app, which API can serve the following purposes?
=> Large storage
=> An efficient method to set and get values
=> Fast
=> Secure (the user cannot tamper with the data)
I am confused between IndexedDB and the File System API.
I have knowledge of web languages and of how online apps can store data on a server, but I don't know much about how to save data offline.
It all depends on your needs.
Chrome apps have a couple of limitations. Because they must be very fast, some web APIs are disabled. You can't use localStorage or WebSQL, for example.
However, in apps you have a different set of storage options:
chrome.storage.local - an equivalent of localStorage, but asynchronous; you can also save/read many objects at once
chrome.storage.sync - same as above, but the data is shared between different app instances (on other browser profiles or machines)
web filesystem API - the well-known web filesystem API, which can keep any kind of file in protected browser storage. Users do not have direct access to these files; only the app does
chrome.syncFileSystem - an extension of the above: it works similarly, but files saved using this API are synced between the app's instances (e.g. on different machines) using Google Drive as a back-end. However, users can't see the synced files in the Drive UI, because they are hidden
chrome.fileSystem API - another extension of the web filesystem API; it gives you access to the user's sandboxed local filesystem. You can read from and write to locations selected by the user
IndexedDB - quoting the docs: IndexedDB is an API for client-side storage of significant amounts of structured data, which also enables high-performance searches of this data using indexes
other custom solutions - saving data on some server and syncing changes across all instances
You can choose any of the above. From your description, you'll probably want to use the IndexedDB API. It is not SQL; it is a different approach to saving data. If you have never used it before, try a sample app first. It is fast and efficient, and combined with the unlimitedStorage permission it can also store large amounts of data.
I also suggest reading the Offline First page in the Chrome Apps documentation, which has examples of solutions for making an app work offline.

Transfer a Google Drive to a different user in a different subdomain

I am looking to transfer users from one domain to a different domain within our Google Apps setup. We don't want users to move their files manually, and we cannot have our admins transfer them (we have 50K+ users to move).
Is there any way, within Google's APIs, that I can program a method to transfer files from an executable? If there is a way to do it in .NET, that would be most desirable.
See the permissions.insert() reference. There's a .NET sample on that page that should get you started.
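As a sketch of the same idea in Python rather than .NET, using the Drive v3 equivalent (permissions.create): granting role=owner requires the transferOwnership flag. Note that cross-domain ownership transfers may be restricted by Workspace policy, so treat this as an outline rather than a turnkey migration:

```python
def ownership_body(new_owner_email):
    """Permission resource that makes new_owner_email the file's owner."""
    return {"type": "user", "role": "owner", "emailAddress": new_owner_email}

def transfer_file(drive, file_id, new_owner_email):
    # transferOwnership=True is required when granting role=owner;
    # `drive` is an authorized Drive v3 service object acting as the
    # current owner (or a domain-wide delegated service account).
    return drive.permissions().create(
        fileId=file_id,
        body=ownership_body(new_owner_email),
        transferOwnership=True,
    ).execute()
```

For a bulk 50K-user migration you would loop this over each user's files, impersonating each source user via domain-wide delegation.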

Google Drive Live API: Server Export of Collaboration Document

I have a requirement to build an application with the following features:
Statistical and Source data is presented on simple HTML pages
Some missing Source data can be added from that HTML page (the data will be both exact numerical values and descriptive text)
Some new Source data can be added from those pages
Confirmed and verified data will NOT be editable via the HTML interface
Data is stored and made continuously available via the HTML interface
Periodically the data added/changed from the interface needs to be pulled back into the source data - but in a VERY controlled way. All data changes and submissions will need verification and checking - and some will trigger re-runs of models ( some of which take hours to run ).
In terms of overview architecture I have:
Large DB that stores and manages the data - this is designed for import processes and analysis. It is not ideal for web presentation or an interface
Code servers that manipulate the data for imports and analysis
Frontend server that works as a proxy to add layer of security to S3
Collection of generated html files on S3 presenting the data required
Before reading about the Google Drive Realtime API my rough plan was to simply serialize data from the HTML interface and post to S3. The import server scripts would then check for new information, grab it, check it, log it and process it into the main data set.
That basic process, however, would mean that once changes were submitted from the web page, they would be lost from the user's view until they had been processed by the backend.
With the Google Drive Realtime API it would appear I could get the best of both worlds.
However for the above to work I would need to be able to access the Collaboration Document in code from the code servers and export the data.
The Realtime API gives javascript access to Export and hand off to a function - however in my use case I want to automate the Export from the Collaboration Document.
The Google Drive SDK does not as far as I can see give any hints on downloading/exporting a file of type "Collaboration File".
What "non-browser-user" triggered methods are there for interfacing with the Collaboration Documents and exporting them?
David
Server-side export is not supported right now. What you could do is save the realtime model to a regular Drive file, and read from that using the standard Drive API. See https://developers.google.com/drive/realtime/models-files for some discussion of different ways to set up interactions between realtime models and Drive files.
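Reading that saved regular Drive file from a server could then look like this in Python (the export MIME type is a placeholder; the rule it encodes is that Google-editors files must be exported, while binary files are downloaded directly):

```python
def is_google_doc(mime_type):
    """Google-editors files must be exported; other files download raw."""
    return mime_type.startswith("application/vnd.google-apps.")

def read_saved_model(drive, file_id):
    # `drive` is an authorized Drive v3 service object from
    # google-api-python-client (pip install google-api-python-client).
    meta = drive.files().get(fileId=file_id, fields="mimeType").execute()
    if is_google_doc(meta["mimeType"]):
        # Export a snapshot, e.g. as plain text (format is an assumption).
        return drive.files().export(fileId=file_id,
                                    mimeType="text/plain").execute()
    return drive.files().get_media(fileId=file_id).execute()
```

The import scripts on the code servers could call this on a schedule, then check, log, and merge the exported data into the main data set as described above.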