Dash App Refresh Underlying Pandas Dataframe hourly - plotly-dash

I have a Dash app dashboard whose graphs are built from a pandas DataFrame. The DataFrame is currently a global variable, outside any callback or function. It lives outside because it uses pyathena to query AWS Athena for the data, which takes about a minute. What I need is either for the whole app to refresh periodically so the DataFrame can be reloaded, or for just the DataFrame part of the code to refresh so the app can stay online. Either works for me. I have looked into dcc.Interval() as well as scheduling, but with no luck.

You probably want a dcc.Interval component that triggers a long callback. You can then re-query and update the DataFrame inside the callback without worrying about the request timing out.
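For illustration, a minimal sketch of the dcc.Interval approach with an ordinary callback (a long/background callback has the same shape, just with the long-callback decorator and a manager). load_data() and make_figure() are hypothetical stand-ins for the pyathena query and the figure-building code:

import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html
from dash.dependencies import Input, Output

def load_data():
    # stand-in for the slow pyathena query against Athena
    return pd.DataFrame({"x": range(10), "y": range(10)})

def make_figure(df):
    # stand-in for whatever builds the figure from the DataFrame
    return px.line(df, x="x", y="y")

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="graph"),
    # fires once an hour; interval is in milliseconds
    dcc.Interval(id="hourly", interval=60 * 60 * 1000, n_intervals=0),
])

@app.callback(Output("graph", "figure"), Input("hourly", "n_intervals"))
def refresh(n_intervals):
    df = load_data()        # re-run the query each time the interval fires
    return make_figure(df)  # rebuild the figure from the fresh DataFrame

if __name__ == "__main__":
    app.run_server()

Because the DataFrame is rebuilt inside the callback, it is no longer a stale module-level global, and the app itself stays online between refreshes.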

Related

Dynamic loading html

I'm using Node.js, Express, and Handlebars.
Currently I'm passing JSON via res.render() to the .hbs template, where I use {{#each dataJson}} to iterate over it and list its contents. Now... what if my JSON had a million entries? The page would take forever to load, right?
Now to my question:
How do I load that dynamically? I want the page to render HTML from the JSON while showing what's already loaded.
You would need to implement a lazy-loading mechanism.
Instead of sending all million records on the initial load, send them in chunks: based on the viewport, decide how many records fit and load the first page. When the user scrolls past those records, send another API request to fetch the next chunk, and so on; a sketch of the pattern follows below.
This way only the records the UI actually needs are loaded.
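A minimal sketch of the chunked endpoint, shown in Python/Flask for brevity (an Express route has the same shape, reading offset and limit from req.query). DATA and the /items route are placeholders:

from flask import Flask, jsonify, request

app = Flask(__name__)
DATA = [{"id": i} for i in range(1_000_000)]  # stand-in for the huge JSON

@app.route("/items")
def items():
    offset = int(request.args.get("offset", 0))
    limit = min(int(request.args.get("limit", 100)), 500)  # cap the page size
    # return only one slice; the client requests the next slice on scroll
    return jsonify(DATA[offset:offset + limit])

if __name__ == "__main__":
    app.run()

The client starts with /items?offset=0&limit=100, renders those rows, and fetches the next offset when the user nears the bottom of the list.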

Firebase: Exporting JSON Unable to export The size of data exported at a single location cannot exceed 256 MB

I used to download a node of a Firebase Realtime Database every day, to monitor some outputs, by exporting the .json file for that node. The JSON file itself is about 8 MB.
Recently, I started receiving an error:
"Exporting JSON Unable to export The size of data exported at a single location cannot exceed 256 MB.Navigate to a smaller part of the database or use backups. Read more about limits"
Can someone please explain why I keep getting this error, when the JSON file I exported just yesterday was only 8.1 MB?
I think I solved it: I disabled a CORS add-on in Chrome, and suddenly the export worked. :)
To work around this, you can use Postman's Import feature, since downloading a large JSON file from the Firebase dashboard in a browser sometimes fails partway through. You can paste the usual cURL command into it, and once the response arrives you just click Save Response. To avoid authentication complexity, you can set the database rule to read: true until the download is complete, though you need to consider the security implications of doing so. Postman may also freeze its UI trying to preview the JSON, but you don't need to worry about that.
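Another route that avoids the console export entirely is the Realtime Database REST API: any node can be fetched as JSON by appending .json to its URL. A minimal sketch in Python, where the project URL and node path are placeholders, and which assumes the node is readable (e.g. with the temporary read: true rule mentioned above):

import requests

# placeholder URL: https://<your-project>.firebaseio.com/<path-to-node>.json
url = "https://YOUR-PROJECT.firebaseio.com/path/to/node.json"

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("node.json", "wb") as out:
        # stream to disk so the payload never has to fit in memory at once
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)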

How can I use the CSV file loading functionality of Dygraphs myself (to load CSV data and then myself add a new series to it before chart rendering)?

Since Dygraphs apparently has no functionality for adding separate data series to a chart one at a time (only for loading all of a chart's series at once, from a CSV file or an in-memory array of arrays), I'm looking to write some code to do this myself.
My reason? I have a "base file" containing a data series many millions of values large. I need to show many separate charts that display this large series TOGETHER with a bunch of respective smaller series, and I'd much rather not duplicate the large series in a new CSV file on disk for each such chart. Instead, I'd like to load the big "base" series from the CSV "base file" once, directly from my JavaScript, and then for each chart merge one of the smaller series into it before handing it off for rendering via a new Dygraph(...) call.
The CSV-loading functionality that obviously already exists somewhere inside the Dygraphs code is very nice, so I'd very much like to reuse it for loading the large "base" series from its own CSV file, if possible.
So, in short, the question is:
How can I use Dygraphs' existing CSV-loading functionality from my own code, to load arbitrary CSV files into Dygraphs' in-memory chart data format, so that I can then merge the resulting series arrays with my own custom code?
What I'm hoping for is something like this:
loaded_data_series_1 = some_secret_internal_function_or_method_of_dygraphs('file1.csv');
loaded_data_series_2 = some_secret_internal_function_or_method_of_dygraphs('file2.csv');
merged_data_series = my_own_custom_dataseries_merging_code(loaded_data_series_1, loaded_data_series_2);
g = new Dygraph(document.getElementById('my_chart'), merged_data_series,{});
The key here would thus be to know what some_secret_internal_function_or_method_of_dygraphs() should be replaced with for this to work.
Could the Dygraph devs or anyone else possibly point me in the right direction here?
(I tried to look inside the Dygraphs code myself, but unfortunately got lost pretty quickly due to my limited JavaScript skills.)
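Whatever the CSV-loading function turns out to be, the merging step itself is small once both series are parsed. A sketch of the idea in Python for clarity (the real code would of course be browser JavaScript, and merge_series is a hypothetical name): align the smaller series to the base series by x-value, padding gaps with None (null in JavaScript), so each row becomes [x, base_y, extra_y], matching Dygraphs' native array-of-arrays format:

def merge_series(base, extra):
    # base and extra are sorted [x, y] pairs; assumes unique x values
    extra_by_x = dict(extra)
    # one row per base point, with the extra series' value or None
    return [[x, y, extra_by_x.get(x)] for x, y in base]

# usage sketch
base = [[1, 10.0], [2, 11.0], [3, 12.0]]
extra = [[2, 99.0]]
print(merge_series(base, extra))  # [[1, 10.0, None], [2, 11.0, 99.0], [3, 12.0, None]]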

How to import JSON data to BigQuery

Currently I am working on a project where I use an API to get JSON data. I can request the data in the browser, but I want to take the next step and load the data that URL produces into BigQuery. Is there a way to do this from the terminal?
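From the terminal, the bq CLI can do this directly, e.g. bq load --source_format=NEWLINE_DELIMITED_JSON --autodetect mydataset.mytable data.json. Note that BigQuery load jobs expect newline-delimited JSON, so a response that is a single JSON array needs converting first. The same load can be done from Python with the official client; a sketch where mydataset.mytable and data.json are placeholders:

from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP credentials and project
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # let BigQuery infer the schema from the JSON
)

with open("data.json", "rb") as f:  # the file fetched from the API
    job = client.load_table_from_file(f, "mydataset.mytable", job_config=job_config)

job.result()  # wait for the load job to finish
print(client.get_table("mydataset.mytable").num_rows, "rows loaded")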

How do you constantly monitor the contents of a file in clojure?

I have an Incanter dataset that I would like to reload every time another process changes the source CSV file. In other words, the mydata_ Incanter dataset should be current every time I look. How can I implement this in idiomatic Clojure?
(use 'incanter.io)
(def mydata_ (read-csv "./changingfile.csv"))
At some point, another process changes changingfile.csv; how do I make sure that mydata_ is updated automatically? This is a bit different from just adding a watch function to an existing data structure within Clojure.
Thanks.
There's a nice library for watching the file system here: https://github.com/derekchiang/Clojure-Watch
It can be used to watch the CSV. You can make mydata_ an atom that the watch callback resets by re-reading the file, or kick off whatever consumes mydata_ from clojure-watch's callback.