Storing GeoTIFFs in GeoMesa

I want to store GeoTIFFs in GeoMesa and retrieve them via WMS. The idea is to save them in the BlobStore, per http://www.geomesa.org/documentation/user/blobstore.html, parsing their spatial info with GDAL (http://www.gdal.org/formats_list.html).
But it seems that you cannot query data persisted in the BlobStore with WMS (see "How to retrieve raster data in GeoMesa with a single query given tempospatial search criteria" below).
Moreover, what if I want to keep temporal info for my GeoTIFFs? Where should I store it?

The BlobStore does not support WMS. It does, however, have a pluggable indexing module which allows temporal information to be handled in an application-specific manner.
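For illustration, here is a minimal sketch of the kind of metadata extraction such an indexing module could build on, using GDAL's Python bindings (osgeo); the TIFFTAG_DATETIME lookup is an assumption, since not every GeoTIFF carries a timestamp:

    from osgeo import gdal

    def geotiff_metadata(path):
        """Extract the spatial extent and (optional) timestamp of a GeoTIFF."""
        ds = gdal.Open(path)
        gt = ds.GetGeoTransform()  # (origin_x, px_w, rot, origin_y, rot, px_h)
        min_x = gt[0]
        max_y = gt[3]
        max_x = min_x + gt[1] * ds.RasterXSize
        min_y = max_y + gt[5] * ds.RasterYSize  # gt[5] is negative for north-up
        # TIFFTAG_DATETIME is optional; None if the file doesn't carry it.
        when = ds.GetMetadataItem("TIFFTAG_DATETIME")
        return (min_x, min_y, max_x, max_y), when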

Related

How to store JSON in DB without schema

I have a requirement to design an app that stores JSON via a REST API. I don't want to put limits on the JSON size (number of keys, etc.). I see that MySQL supports storing JSON, but we would have to create a table/schema first and then store the records.
Is there any way to store JSON in some type of DB and query the data by keys?
EDIT: I don't want to use any in-memory DB like Redis.
Use Elasticsearch. In addition to schemaless JSON, it supports fast search (a minimal indexing sketch follows the list below).
The tagline of Elasticsearch is "You know, for search".
It is built on top of the text-indexing library Apache Lucene.
The advantages of using Elasticsearch are:
Scales to petabytes of data across clusters.
Fully open source, at no cost.
Enterprise support is available with a Platinum license.
Comes with additional benefits such as analytics via Kibana.
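A minimal sketch with the official Python client (elasticsearch-py 8.x; the index name "docs" and the local URL are arbitrary), showing schemaless indexing and a key-based query:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # No schema needed up front: Elasticsearch infers a mapping on first index.
    es.index(index="docs", document={"user": "alice", "tags": ["a", "b"], "n": 42})

    # Query by key.
    hits = es.search(index="docs", query={"match": {"user": "alice"}})
    for hit in hits["hits"]["hits"]:
        print(hit["_source"])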
I believe NoSQL is the best solution, e.g. MongoDB. I have tested MongoDB; it looks good and has a Python module for easy interaction. For a quick overview of the pros, see https://www.studytonight.com/mongodb/advantages-of-mongodb
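For comparison, a minimal pymongo sketch (the local server, "mydb" database, and "docs" collection are assumptions); MongoDB likewise needs no schema up front:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    docs = client.mydb.docs

    # Insert an arbitrary JSON document; no schema required.
    docs.insert_one({"user": "alice", "tags": ["a", "b"], "n": 42})

    # Query by key.
    for doc in docs.find({"user": "alice"}):
        print(doc)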
I've had great results with Elasticsearch, so I second this approach as well. One question to ask yourself is how you plan to access the JSON data once it is in a repository like Elasticsearch: will you simply store the JSON doc, or will you flatten out the properties so that they can be aggregated individually? But yes, it is indeed fully scalable: by increasing your compute capacity via instance size, expanding your disk space, or implementing index sharding if you have billions of records in a single index.
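If you do decide to flatten nested properties for per-field aggregation, a small generic helper (pure Python, no library assumptions) could look like this:

    def flatten(obj, prefix=""):
        """Flatten nested dicts into dotted keys, e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
        out = {}
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                out.update(flatten(value, path))
            else:
                out[path] = value
        return out

    print(flatten({"user": {"name": "alice", "stats": {"logins": 3}}}))
    # {'user.name': 'alice', 'user.stats.logins': 3}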

Modify the response size of Google Cloud Datastore API requests

I am using Google Cloud Datastore to store some data. When trying to extract all entities from my "Kind" through the API into Google Apps Script (GAS) code, I realized that the API extracts 300 entities at a time. To extract the totality of the entities, I used the "cursor" option to fetch each next batch where the previous one stopped.
Is there any way to extract all entities (or at least more than 300) at once?
While searching the web, I did not find any specific answer to this.
The max number of entities you can update/upsert in one go via Datastore's Data API is 500, but with a lookup operation you could potentially fetch 1,000 entities (as long as they are collectively under 10 MB for the transaction), as listed under the "Limits" section of Datastore's reference documentation.
However, you might be able to leverage the export/import endpoints of Datastore's Admin API to export/import data in bulk. Check out the guide for more information.
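Outside of Apps Script, the cursor-based batching the question describes looks like this with the google-cloud-datastore Python client (the kind name "Task" and page size are illustrative); each fetch passes the previous page's cursor:

    from google.cloud import datastore

    client = datastore.Client()

    def fetch_all(kind, page_size=500):
        """Yield every entity of a kind, paging with cursors."""
        cursor = None
        while True:
            query = client.query(kind=kind)
            it = query.fetch(limit=page_size, start_cursor=cursor)
            page = next(it.pages)          # materialize one page of results
            yield from page
            cursor = it.next_page_token    # None once all pages are consumed
            if cursor is None:
                break

    for entity in fetch_all("Task"):
        print(entity.key, dict(entity))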

How to retrieve raster data in GeoMesa with a single query given tempospatial search criteria

According to How are GeoTIFFs persisted in GeoMesa?
GeoMesa's raster data is indexed by spatial extent solely.
Can I also save time info with the raster data? Otherwise, for each raster I will have to persist another record holding its time info. In that case, to retrieve a raster using a tempospatial query (is WMS capable of this? According to [1] it seems to be), I will have to retrieve both records; this means for x rasters ==> x+1 GeoMesa hits (retrievals).
[1] http://docs.geoserver.org/stable/en/user/services/wms/time.html
Currently, no, you cannot save time info with the raster data.
WMS does support time and time-range queries, but that capability isn't wired up completely in the AccumuloRasterStore.
As an alternative, GeoMesa does allow for storing blobs and creating pointers to them on the way in. The GeoMesa BlobStore doesn't allow for WMS access, but it is a nifty, extensible capability.

How can I store binary data in Orion ContextBroker? Is it possible?

I'm using Orion to store context information and I'm interested in storing binary data (an array of bytes) in the attributes.
Is this possible in the current version (1.1.0)?
Thanks in advance.
The short answer is no, it's not possible to store binary data in version 1.1.0.
This is because the Orion Context Broker exposes a RESTful API: all data is transported as text, in XML (very old versions) or JSON (recent versions). Orion uses MongoDB as its storage engine, and MongoDB stores objects in a binary format called BSON. BinData is a BSON data type for a binary byte array. However, MongoDB objects are typically limited to 4 MB in size; to deal with this, files are "chunked" into multiple objects of less than 4 MB each, which has the added advantage of allowing efficient retrieval of a specific range of a file. But BSON binary data is not exposed through Orion, and likely never will be, because the Orion Context Broker was not designed to store binary data.
You can use some alternatives:
Use a separate file server and reference the file by URL (or some other server-side technology); you can also use other FIWARE GEs such as CKAN or Object Storage.
Convert the binary data to hexadecimal; it then becomes alphanumeric text, and on receipt you can convert it back to binary. There are examples in Python, PHP, Java and C++ of manipulating binary data as hexadecimal, and a sketch of the round trip follows below.
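A minimal Python sketch of that second alternative (the entity id/type and the attribute name "blob" are illustrative; the standard NGSI v2 /v2/entities endpoint is assumed):

    import binascii
    import requests

    payload = b"\x00\x01\x02binary-bytes"
    hex_text = binascii.hexlify(payload).decode("ascii")  # '000102...'

    # Store the hex string as an ordinary text attribute.
    requests.post("http://localhost:1026/v2/entities", json={
        "id": "Sensor1",               # illustrative entity id
        "type": "Sensor",
        "blob": {"value": hex_text, "type": "Text"},
    })

    # On the consumer side, decode it back to bytes.
    original = binascii.unhexlify(hex_text)
    assert original == payload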

How to store spatial files in MySQL

What is the better way to store spatial data (say, tracks) in MySQL: internally, or as references to external flat files?
MySQL has spatial extensions for storing geographic objects (objects with geometric attributes). More detail is available in the MySQL documentation.
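For example, a minimal sketch with mysql-connector-python on MySQL 5.7+ (the credentials, database, and table names are illustrative), storing a track as a LINESTRING:

    import mysql.connector

    conn = mysql.connector.connect(user="root", password="secret", database="gis")
    cur = conn.cursor()

    cur.execute("CREATE TABLE IF NOT EXISTS tracks ("
                "  id INT AUTO_INCREMENT PRIMARY KEY,"
                "  geom LINESTRING NOT NULL)")

    # Store a track as WKT; MySQL converts it to its internal geometry format.
    cur.execute("INSERT INTO tracks (geom) VALUES "
                "(ST_GeomFromText('LINESTRING(0 0, 1 1, 2 1.5)'))")
    conn.commit()

    # Read it back as WKT.
    cur.execute("SELECT id, ST_AsText(geom) FROM tracks")
    for row in cur.fetchall():
        print(row)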
I would recommend against MySQL if you want to store it as explicitly spatial information. Instead I would recommend PostgreSQL/PostGIS if you want to stay with an open-source DB. MySQL barely implements any of its spatial functionality; if you read the docs closely, most spatial functions are yet to be implemented.
If you don't care about explicitly spatial information, then go ahead and store it directly in the DB.
If you give some more background on what you want to do, we might be able to help more.
The "better way" to store data depends on several factors which you, yourself need to consider:
Are the files rediculously large? +50MB? MySql can time out on long transactions.
Are you working on a closed network environment where the file system is secure and controlled?
Do you plan only to serve the raw files? There's no point in processing them into MySql format only to re-process them on the way out.
Is it expected that 'non technical' people are going to want to access this data? 'non technical' people generally don't like obfuscated information.
Do you have the capability in your applciation (if you have an applicaiton) to read the spatial data in the format that MySql stores it in? There's no point in processing and storing a .gpx or .shp file into MySql format if you can't read it back from there.
Do you have a system / service that will control the addition / removal / modification of the file structure and corresponding database records? Keeping a database and file system in sync is not an easy task. Especially when you consider the involvement of 'non technical' people.