I see that Google supports both standard files and shortcuts. I don't understand which format I should create to store my model permanently, or whether I can do that at all. Can I use Google Realtime as simple cloud storage? (Previous generations of programmers would have called such cloud storage a database.)
If you have binary contents, use a standard file. Otherwise, use a shortcut file.
If you are using the Realtime API and plan to store all data with it, you can use a shortcut file and associate the realtime document with it.
I'm not sure what you mean by simple cloud storage, but you could use the Realtime API to store arbitrary key-value pairs.
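For the shortcut-file route, here is a minimal sketch using the Drive API v2 Python client. The `credentials` object and the 'my-model' title are placeholders; the drive-sdk MIME type is what the Realtime docs use to mark a contentless, app-owned shortcut file:

```python
from googleapiclient.discovery import build

# Assumes `credentials` is an authorized OAuth2 credential with a Drive scope.
drive = build('drive', 'v2', credentials=credentials)

# A shortcut file has no binary content; the drive-sdk MIME type associates
# it with your app so a realtime document can be attached to it client-side.
body = {
    'title': 'my-model',
    'mimeType': 'application/vnd.google-apps.drive-sdk',
}
shortcut = drive.files().insert(body=body).execute()
print(shortcut['id'])  # load this file ID with gapi.drive.realtime.load() in JS
```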
I am trying to join two input sources in the Google Cloud Platform: one from BigQuery and the other from Google Cloud Storage, which contains a .csv file. Using a Joiner looks like the best option.
But I am curious whether the same can be achieved with the table lookup: column 'table' directive. The input records will come from BigQuery, and the 'table' will refer to the .csv file in Google Cloud Storage. Is it possible to achieve this with Wrangler alone, without using a Joiner?
Absolutely, yes. You can use Wrangler instead of a Joiner to connect two data sources, apply basic transformations, and export the result to a sink in Google Cloud Platform.
For your specific scenario, with BigQuery providing the input records and the 'table' coming from the .csv file in Google Cloud Storage, please check this tutorial, which walks through the specific steps.
In my Google console, the Cloud Storage pricing page says that the price for standard storage is $0.026 per gigabyte-month, which I think means that 500 GB stored for one month will cost $13, since 500 × 0.026 = 13. But the article The Google Drive Price Cut Changes The Game For Personal Cloud Storage says:
Google is making a terabyte of cloud storage available for just $10
I don't see where you upload data to Google Drive in the Google Cloud Console.
My second question is that I want to make sure that I can create a virtual instance, connect it to Google Drive or Cloud Storage, read the data from it, and load that data into the RAM of the virtual instance.
Google Drive
is a web application that works as a file store, allowing users to store files. Communication with it is normally done through the web application itself; however, developers can use the Google Drive API to interact with Google Drive programmatically.
You may want to go through the documentation for the Google Drive API to understand what it is capable of.
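As a concrete illustration of that programmatic access, here is a minimal sketch (assuming an authorized `credentials` object and the google-api-python-client library; the file name is a placeholder) that uploads a local file to Drive:

```python
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumes `credentials` is an authorized OAuth2 credential with a Drive scope.
drive = build('drive', 'v3', credentials=credentials)

# Upload a local CSV file into the user's Drive.
media = MediaFileUpload('dataset.csv', mimetype='text/csv')
created = drive.files().create(
    body={'name': 'dataset.csv'},
    media_body=media,
    fields='id',
).execute()
print('Uploaded file id:', created['id'])
```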
Google Cloud Storage
is designed as unified object storage for developers and enterprises. Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. You can use it for a range of scenarios, including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download.
Interaction with it is done primarily through the Cloud Console and command-line tools.
I don't see where you upload data to Google Drive in the Google Cloud Console.
You don't; the Cloud Console won't help you upload to Google Drive.
My second question is that I want to make sure that I can create a virtual instance, connect it to Google Drive or Cloud Storage, read the data from it, and load that data into the RAM of the virtual instance.
Google Drive is a web application; you can't create a virtual instance of it.
You might want to go through a few of the quickstarts to understand how the Google Cloud Console and the command-line tool work: Quick Starts
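Cloud Storage, on the other hand, is straightforward to read from a Compute Engine instance. A minimal sketch with the google-cloud-storage Python client (the bucket and object names are placeholders; on a VM the client authenticates with the instance's service account automatically):

```python
from google.cloud import storage

# On a Compute Engine VM this picks up the instance's service account.
client = storage.Client()

# 'my-bucket' and 'data/file.csv' are placeholder names.
blob = client.bucket('my-bucket').blob('data/file.csv')
data = blob.download_as_bytes()  # the object's contents are now in RAM
print(len(data), 'bytes loaded into memory')
```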
Can I use BigQuery's wildcard feature to query external tables stored as CSV files in Google Cloud Storage?
The CSV files are in a Google Cloud Storage bucket, and each file holds a different partition / chunk of the data, like this:
org_score_p1
org_score_p2
...
org_score_p99
Also, I expect the number of files in the bucket to keep growing, so new files will be added with the same naming scheme.
Yes. However, you need to make sure that:
your Google Cloud Storage bucket is configured as multi-regional
your bucket's multi-regional location is set to the same place as the one where you run your BigQuery jobs.
Otherwise you will get an error / exception similar to this one:
Cannot read and write in different locations: source: US-EAST4, destination: US
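Here is a sketch of the wildcard setup with the BigQuery Python client (the bucket name is a placeholder, and schema autodetection is an assumption; the trailing `*` in the source URI also matches files added later):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Define an external table over every CSV matching the wildcard URI.
# 'my-bucket' is a placeholder; new org_score_* files are picked up automatically.
ext = bigquery.ExternalConfig('CSV')
ext.source_uris = ['gs://my-bucket/org_score_*']
ext.autodetect = True
ext.options.skip_leading_rows = 1

# Attach the definition to the query as a temporary external table.
job_config = bigquery.QueryJobConfig(table_definitions={'org_scores': ext})
sql = 'SELECT COUNT(*) AS n FROM org_scores'
for row in client.query(sql, job_config=job_config).result():
    print(row.n)
```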
I want to use Maps Engine to show data on a map. The problem is that my data (KMZ, CSV, MySQL) is on a local server, and because of internal politics I can't upload all of it to the cloud. I have seen that the Google Maps Engine API documentation talks about authentication for installed applications (https://developers.google.com/maps-engine/documentation/oauth/installedapplication). But does this mean that I can use Google Maps Engine locally? Can I use my local data in Google Maps Engine without uploading it to the cloud?
Google Maps Engine is a cloud-based application. You must upload your data to GME in order to make use of it. The link you reference is for OAuth, an authentication mechanism that provides access to GME maps requiring a user account. An installed application is, for example, a Windows app that uses the Maps Engine API.
If you can get past your cloud restriction, you could write a connector from MySQL to Maps Engine relatively easily using the Maps Engine API, as sketched below.
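A rough, hedged sketch of what such a connector could look like in Python. The endpoint path and payload shape follow the Maps Engine v1 REST docs as best I recall and should be treated as assumptions; TABLE_ID, ACCESS_TOKEN, and the MySQL schema are placeholders:

```python
import json
import pymysql
import requests

# Pull point data from the local MySQL server (placeholder schema).
conn = pymysql.connect(host='localhost', user='user', password='pw', db='geo')
with conn.cursor() as cur:
    cur.execute('SELECT id, name, lng, lat FROM places')
    features = [{
        'type': 'Feature',
        'geometry': {'type': 'Point', 'coordinates': [lng, lat]},
        'properties': {'gx_id': str(pk), 'name': name},
    } for pk, name, lng, lat in cur.fetchall()]

# Push the rows to a Maps Engine table (endpoint/payload are assumptions).
resp = requests.post(
    'https://www.googleapis.com/mapsengine/v1/tables/TABLE_ID/features/batchInsert',
    headers={'Authorization': 'Bearer ACCESS_TOKEN',
             'Content-Type': 'application/json'},
    data=json.dumps({'features': features}),
)
resp.raise_for_status()
```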
If uploading remains off the table, in your situation you should probably look at GeoServer instead.
I'm building a web app that integrates with Google Drive, and I am wondering whether there is a way to list, search, or delete files.
I see from https://developers.google.com/drive/v1/reference/files#resource that there are 4 operations. If there are no list and search capabilities, then the onus is on the app to handle the management of file IDs.
Is there another API I should be using? Are those features in the works?
Drive API v2 launched yesterday and now supports full file operations, including listing, searching, and more.
Check out the reference docs.
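A short sketch of those operations with the v2 Python client (assuming an authorized `credentials` object; note that v2 query syntax uses `title`, and 'draft' is a placeholder search term):

```python
from googleapiclient.discovery import build

# Assumes `credentials` is an authorized OAuth2 credential with a Drive scope.
drive = build('drive', 'v2', credentials=credentials)

# List the first ten files.
listing = drive.files().list(maxResults=10).execute()
for item in listing.get('items', []):
    print(item['title'], item['id'])

# Search by title, then delete the matches.
matches = drive.files().list(q="title contains 'draft'").execute()
for item in matches.get('items', []):
    drive.files().delete(fileId=item['id']).execute()
```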