As Azure Blob Storage is a very cheap data storage solution, would it make sense to store JSON data in blobs so clients can access it like a data API? That way we wouldn't need to spin up Web Apps/APIs just to serve JSON data.
That could work, depending on the scenario. The blobs are updated on the fly when you push a new version of the JSON files.
I demonstrated this a while ago with a simple app that uploads and updates a file. Clients could target the URL, and to them it seemed like they were accessing a JSON data feed that kept being updated.
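For illustration, a minimal sketch of the client side, assuming the container allows anonymous read (the account, container, and blob names here are hypothetical; a SAS token appended to the URL works too):

```typescript
// Hypothetical blob URL; the container must allow anonymous read,
// or the URL must carry a SAS token.
const FEED_URL = "https://mystorageacct.blob.core.windows.net/feeds/latest.json";

// From the client's point of view this is just a read-only JSON API.
async function getFeed(): Promise<unknown> {
  const res = await fetch(FEED_URL);
  if (!res.ok) throw new Error(`Feed request failed: ${res.status}`);
  return res.json();
}
```

Since an upload simply overwrites the blob, every subsequent GET sees the new version with no server-side code involved.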
I'm building an architecture using boto3, and I want to dump data from an API into S3 in JSON format. What's blocking me right now: first, Firehose does NOT seem to support plain JSON; my current workaround is to not compress the records, but the result is still different from a JSON file. I'd still like a better option for making the files more compatible.
Second, the file names can't be customized. All the collected data will eventually be queried through Athena, so can boto3 do the naming?
Answering a couple of your questions. First, if you stream JSON into Firehose, it will write JSON to S3. JSON is the data structure; compression is just the file encoding, and compressing JSON doesn't make it something else. You'll just need to decompress it before consuming it.
Re: file naming, you shouldn't care about that; let the system name the files whatever it wants. If you define the Athena table with the S3 location, you'll be able to query it, and when new files are added, you'll be able to query them immediately.
Here is an AWS tutorial that walks you through the process: JSON stream to S3 with Athena query.
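For the producer side, here's a minimal sketch (TypeScript with the AWS SDK v3; the stream name is hypothetical, and the boto3 equivalent is `firehose_client.put_record(...)` with the same parameters). Note the trailing newline per record: it keeps the S3 objects newline-delimited JSON, which is what Athena's JSON SerDe expects:

```typescript
import { FirehoseClient, PutRecordCommand } from "@aws-sdk/client-firehose";

const client = new FirehoseClient({ region: "us-east-1" });

// Push one JSON record into a hypothetical delivery stream.
async function sendEvent(event: Record<string, unknown>): Promise<void> {
  await client.send(
    new PutRecordCommand({
      DeliveryStreamName: "my-delivery-stream",
      // Trailing "\n" so each record lands on its own line in the S3 object.
      Record: { Data: new TextEncoder().encode(JSON.stringify(event) + "\n") },
    })
  );
}
```

On the Athena side, you just point the table's LOCATION at the S3 prefix Firehose writes to (e.g. with the `org.openx.data.jsonserde.JsonSerDe` row format); the individual file names never enter into it.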
Inside ADF, I'm trying to get the ready-made contents of a GraphQL API query (for a Web activity block) that is stored in a JSON file somewhere in blob storage. Because of speed requirements, we can't afford to spin up Databricks every single time.
What can be done to get the content (not the metadata) of a JSON file and store it in an ADF variable that would parameterize further pipeline blocks? The path to the file is known and fixed, and the file is accessible via a linked service.
I would go with creating a metadata Azure SQL database (the Basic tier costs only about 5 USD per month). It can be connected to Azure Data Factory via a private link. This is the simplest and fastest way: you just save the data there and later fill dataflow (etc.) parameters with results from that database, e.g. via a Lookup activity.
I'm currently thinking of a concept for a react-native app where people add events over a period of time, like a diary/log. These events need to be exported, and for security and privacy reasons I don't want to use a database. I've read that you can use JSON files as a storage method, too.
How can I store data from the app in a JSON file and load the data from the JSON file back into the app? I don't need any code; helpful articles or webpages are appreciated.
Assuming that you already have all the app data in a JSON object, it's rather simple (a short sketch follows the list):
Decide where to store appdata.json; let's call it APP_DATA_PATH.
Find a library to read/write files (I've only used expo-file-system).
On app boot, check whether APP_DATA_PATH exists; if it does, read the file and load it into the app, and if not, assume it's a new user.
Whenever the app data changes, write the changes to APP_DATA_PATH.
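A minimal sketch of those steps with expo-file-system (the file name and data shape are placeholders):

```typescript
import * as FileSystem from "expo-file-system";

// Lives in the app's private sandbox; "appdata.json" is a placeholder name.
const APP_DATA_PATH = `${FileSystem.documentDirectory}appdata.json`;

// On app boot: load existing data, or fall back to an empty state for a new user.
export async function loadAppData(): Promise<Record<string, unknown>> {
  const info = await FileSystem.getInfoAsync(APP_DATA_PATH);
  if (!info.exists) return {}; // new user
  return JSON.parse(await FileSystem.readAsStringAsync(APP_DATA_PATH));
}

// Whenever the app data changes: persist the whole object back to disk.
export async function saveAppData(data: Record<string, unknown>): Promise<void> {
  await FileSystem.writeAsStringAsync(APP_DATA_PATH, JSON.stringify(data));
}
```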
I want to get a JSON file from Azure Blob Storage through the browser.
I used Stream Analytics, and it outputs a JSON file into the blob container. Now I need to get the information inside the JSON file in order to show the IoT device status in real time.
I tried to use JSONP, but I don't know how to add the callback wrapper to the JSON file without downloading it. Is there any way to add the callback?
Or is there another way to get the information inside the container?
For this particular scenario, I'd recommend Power BI. Stream Analytics now has a direct output to Power BI, and you can pretty much customize the dashboard for your real-time IoT needs.
You can refer to this article for a step-by-step Stream Analytics + Power BI walkthrough.
Coming back to your question: you need to download the blob to access its content. Stream Analytics output to blob storage is usually for archiving or later predictive-analysis scenarios.
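That said, if polling is acceptable, the browser can do the download itself with a plain fetch instead of JSONP, assuming a CORS rule is configured on the storage account. A minimal sketch with a hypothetical URL:

```typescript
// Hypothetical blob URL; it must be readable anonymously or via a SAS token,
// and the storage account needs a CORS rule for your page's origin.
const BLOB_URL = "https://myiotstorage.blob.core.windows.net/output/devicestatus.json";

async function pollStatus(): Promise<void> {
  const res = await fetch(BLOB_URL);
  const status = await res.json();
  console.log(status); // replace with your dashboard update
}

// Each poll re-downloads the file, so this is "near real time" at best.
setInterval(pollStatus, 5000);
```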
If you still prefer not to use Power BI, I'd either route the Stream Analytics output to an Event Hub and read the data from there in real time, or alternatively save the data into a NoSQL DB like DocumentDB on Azure and read from there. I can recommend Highcharts if you want custom gauges etc. to visualize the data.
Hope this helps.
Has anyone done this? I want to parse data from JSON (the Google location API) and store it in an SQLite database on the iPhone.
The problem is: if the parsed data is huge, how do I synchronize parsing and saving the data in SQLite locally on the iPhone?
The user interface includes a table of the saved data, and it should work without any interruption.
The solution should use the Cocoa framework with Objective-C.
You should start with some tutorials:
How do I parse JSON with Objective-C?
http://cookbooks.adobe.com/post_Store_data_in_the_HTML5_SQLite_database-19115.html
http://html5doctor.com/introducing-web-sql-databases/
Parse JSON in JavaScript?