I have a lot of data to store, so I want to know: can IndexedDB compress my data?
The https://github.com/google/leveldb README says:
Data is automatically compressed using the Snappy compression library.
But when I tested this, the data did not appear compressed in the dev-tools storage panel.
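Note that even if the browser's LevelDB backend compresses pages with Snappy on disk, dev tools will still show the uncompressed logical values, so you cannot verify it there. If storage size matters, one option is to compress values yourself before storing them. A minimal sketch, assuming a browser that supports the CompressionStream API:

```typescript
// Sketch: gzip a string before putting it into IndexedDB.
// CompressionStream is available in modern Chromium-based browsers.
async function gzipString(data: string): Promise<Blob> {
  const compressed = new Blob([data])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  // Store the resulting Blob in IndexedDB; reverse with DecompressionStream.
  return new Response(compressed).blob();
}
```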
I am working with a large Twitter dataset in the form of a JSON file. When I try to import it into Tableau, the upload fails with an error because of Tableau's 128 MB data upload limit.
Because of this, I need to shrink the dataset to under 128 MB, which reduces the effectiveness of the analysis.
What is the best way to upload and handle large JSON data in tableau?
Do I need to use an external tool for it?
Can we use AWS products to handle this? Please advise!
From what I can find in unofficial documents online, Tableau does indeed have a 128 MB limit on JSON file size. You have several options.
Split the JSON file into multiple smaller files and union them in your data source (https://onlinehelp.tableau.com/current/pro/desktop/en-us/examples_json.html#Union); one way to do the splitting is sketched after this list.
Use a tool to convert the JSON to CSV or Excel (search for a JSON-to-CSV converter).
Load the JSON into a database such as MySQL, and use MySQL as the data source.
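For the first option, here is a minimal Node sketch of splitting a large JSON array into pieces. The filename tweets.json and the chunk size are assumptions you would adjust until each output file lands under 128 MB; for truly huge files a streaming JSON parser would be safer than reading the whole file at once.

```typescript
// Split a large JSON array into smaller files for Tableau's union feature.
import { readFileSync, writeFileSync } from "fs";

const tweets: unknown[] = JSON.parse(readFileSync("tweets.json", "utf8"));
const CHUNK = 50_000; // rows per output file; tune until each file is < 128 MB

for (let i = 0; i < tweets.length; i += CHUNK) {
  const part = tweets.slice(i, i + CHUNK);
  writeFileSync(`tweets_part${i / CHUNK}.json`, JSON.stringify(part));
}
```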
You may want to consider posting in the Ideas section of the Tableau Community pages and add a suggestion for allowing larger JSON files. This will bring it to the attention of the broader Tableau community and product management.
I am new to GeoMesa; so far I have only typed the geomesa command. After following the command-line tools tutorial on the GeoMesa website, I found information on ingesting data into GeoMesa from a .csv file.
So, for my research:
I have a MySQL database storing all the information sent from an Android Application.
And I want to perform some geospatial analytics on it.
Right now I am converting my MySQL table to a .csv file and then ingesting it into GeoMesa, as advised on the GeoMesa website.
But my questions are:
Is there a better option? The data is in the GB range and it is streaming, so I would have to generate a new .csv file regularly.
Is there any API through which I can connect my MySQL database to geomesa?
Is there any way to ingest a .sql dump file? That would be much easier than going through .csv.
Since you are dealing with streaming data, I'd point to a few GeoMesa integrations and options:
First, you might want to check out NiFi for managing data flows. If that fits into your architecture, then you can use GeoMesa with NiFi.
Second, Storm is quite popular for working with streaming data. GeoMesa has a brief tutorial for Storm here.
Third, to ingest SQL dumps directly, one option would be to extend the GeoMesa converter library to support them. So far, we haven't had that as a feature request from a customer or as a contribution to the project. It'd definitely be a sensible and welcome extension!
I'd also point out the GeoMesa gitter channel. It can be useful for quicker responses.
I have been doing extensive research on GTFS and GTFS-Realtime. All I want to be able to do, is find out how late a certain bus would be. I can't seem to find where I can connect to, to properly search for a specific bus number. So my questions are:
Where/how can I find the GTFS-Realtime feed?
How can I properly open the file and make it location-specific?
I've been trying to use http://www.yrt.ca/en/aboutus/GTFS.asp to download the file, but can't figure out how to open the csv file properly.
According to What is GTFS-realtime?, the GTFS-realtime data is not in CSV format. Instead, it is based on Protocol Buffers:
Data format
The GTFS-realtime data exchange format is based on Protocol Buffers.
Protocol buffers are a language- and platform-neutral mechanism for serializing structured data (think XML, but smaller, faster, and simpler). The data structure is defined in a gtfs-realtime.proto file, which then is used to generate source code to easily read and write your structured data from and to a variety of data streams, using a variety of languages – e.g. Java, C++ or Python.
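As a concrete example, here is a hedged sketch using the community gtfs-realtime-bindings npm package to decode a trip-updates feed. The feed URL is a placeholder for whatever endpoint your agency (YRT in this case) publishes, and the field names follow the protobufjs-generated bindings:

```typescript
import GtfsRealtimeBindings from "gtfs-realtime-bindings";

async function printDelays(feedUrl: string): Promise<void> {
  const res = await fetch(feedUrl);
  const buffer = new Uint8Array(await res.arrayBuffer());
  const feed = GtfsRealtimeBindings.transit_realtime.FeedMessage.decode(buffer);

  for (const entity of feed.entity) {
    // Each TripUpdate carries per-stop arrival/departure delays in seconds.
    const update = entity.tripUpdate;
    const delay = update?.stopTimeUpdate?.[0]?.arrival?.delay;
    if (update && delay != null) {
      console.log(`trip ${update.trip?.tripId}: ${delay}s late`);
    }
  }
}
```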
I'm building a Chrome app which requires a persistent, local database, which in this case can be either IndexedDB or basic object storage. I have several questions before I begin developing the app:
Is it possible for IndexedDB data to persist after uninstallation of the Chrome app and the Chrome browser?
If the IndexedDB file/data persists, can I locate and view it?
If I can locate it but can't view it, is it possible to change the location of the IndexedDB file?
Can I store the IndexedDB data in a file located on the desktop or in any other custom location?
If I had these requirements, I would see a couple of options to pursue:
1. Write a simple database backed by the FileSystem API, and periodically lock the database and back up that file. This would be pretty cool, because I don't know of anyone who has implemented a simple FileSystem-API-backed database, but I could see it being useful for other purposes.
2. Mirror the database: make any edits to a copy of the database stored on your backup server as well, and write functions that can import snapshots from your backup.
3. Simply write functions that export from your IndexedDB to some backup format, and that import from the backup (a sketch of this appears below).
All options seem quite time-consuming. It would be cool if, when you create an IndexedDB database, you could specify an HTML FileSystem API entry file to back it; that way you wouldn't have to do 1 or 2.
I agree that it seems like quite an oversight that an IndexedDB database is so difficult to back up.
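For option 3, a minimal sketch of the export half; myDB and items below are hypothetical names standing in for your own database and object store:

```typescript
// Dump every record in one object store to a JSON string for backup.
function exportStore(dbName: string, storeName: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open(dbName);
    open.onerror = () => reject(open.error);
    open.onsuccess = () => {
      const req = open.result
        .transaction(storeName, "readonly")
        .objectStore(storeName)
        .getAll();
      req.onerror = () => reject(req.error);
      req.onsuccess = () => resolve(JSON.stringify(req.result));
    };
  });
}

// Usage: const json = await exportStore("myDB", "items");
```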
I am writing a basic browser-only application, with no back-end server code at this time, so I also have storage requirements, though I am not doing backup. I am looking at PouchDB as a solution: http://pouchdb.com/
Everything is looking good so far. They also mention that they would work well with Google Apps.
http://pouchdb.com/faq.html#native_support
The nice thing is that you could sync your PouchDB data with a server-side CouchDB instance.
http://pouchdb.com/api.html#replication
http://pouchdb.com/api.html#sync
If you want to keep the application local to the browser with no server support, you could back up the entire database using a batch fetch.
http://pouchdb.com/api.html#batch_fetch
I would run the result through gzip before putting it on the filesystem.
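A minimal sketch of both ideas, assuming a local database named app-data and a CouchDB instance at a placeholder URL:

```typescript
import PouchDB from "pouchdb";

const db = new PouchDB("app-data");

// Continuous two-way replication with a remote CouchDB instance.
db.sync("http://localhost:5984/app-data", { live: true, retry: true });

// Batch-fetch every document for a local backup (gzip it before saving).
async function backup(): Promise<string> {
  const result = await db.allDocs({ include_docs: true });
  return JSON.stringify(result.rows.map(row => row.doc));
}
```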
I am currently attempting this very same thing. I am using the Chrome Sync File System API (http://goo.gl/5q8Z9M), but I am running into some instances where my file (or its contents) is deleted. With this approach I am writing out a JSON object. Hope this helps.
Is there any plan to make it possible to download (or synchronise from) a 'pre-built' database file, so to speak, for use with a local web browser database like WebSQL or IndexedDB?
At the moment, to add to or update a local database, it's necessary to export or store the data in a format such as XML or JSON, then fetch, parse, and store it.
I am under the impression that what you're looking for would be too much over-standardization on the browser side. As I understand it, IndexedDB is meant to be simple and robust enough for anyone to write JavaScript code that does the synchronization to your database server of choice.
In the meantime, you might take a look at these projects:
PouchDB - An implementation of the CouchDB API on top of IndexedDB. One of its premises is to offer CouchDB's decentralized master-to-master synchronization capabilities in the browser.
BrowserCouch - A similar project, but using WebSQL as the browser storage.
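With PouchDB, the "pre-built database" workflow from the question can be reduced to fetching a JSON dump and bulk-loading it. A hedged sketch, where the dump URL and database name are placeholders:

```typescript
import PouchDB from "pouchdb";

// Fetch a server-generated JSON array of documents and load it in one call.
async function seedFromDump(url: string): Promise<void> {
  const docs = await (await fetch(url)).json();
  await new PouchDB("prebuilt").bulkDocs(docs);
}
```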