Is it possible to use keplergl without Mapbox?

I want to do some spatiotemporal data analysis with kepler.gl.
As the data is confidential and also huge, I cannot upload it to the demo site.
I tried installing keplergl locally, but it needs a Mapbox access token (which I think is paid).
Is there a way I can use kepler.gl with OpenStreetMap? (I want to run it in a Jupyter notebook or through Python, or via a one-time React setup, since I am not familiar with React.)
Also, when I use kepler.gl in a Jupyter notebook, an empty map loads, as shown below:
(screenshot: map without data)
As soon as I load data, the map goes away:
(screenshot: map with data)
Here is the console output:
(screenshot: error)
My Jupyter notebook is also configured with:
jupyter nbextension install --py --sys-prefix keplergl
jupyter nbextension enable keplergl --py --sys-prefix
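For reference, I create the map with the standard keplergl widget API, roughly like this (the CSV file name here is just a placeholder for my actual data):
import pandas as pd
from keplergl import KeplerGl
df = pd.read_csv("my_data.csv")          # confidential spatiotemporal data
map_1 = KeplerGl(height=500)             # empty map widget
map_1.add_data(data=df, name="data_1")   # attach the DataFrame to the map
map_1                                    # last expression in the cell renders the widget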
Is there a way to fix the Jupyter notebook error, or is there an alternative open-source tool like kepler.gl?
I saw some solutions using a tile server with Docker, but they were not completely clear to me. I would prefer a way in which I can use it in a Jupyter notebook.
I tried pydeck from deck.gl, but it wasn't very interactive.
For example, I have multiple columns in my database, and kepler.gl lets me filter based on each column.
Kepler.gl also allows one to select different columns as weights.
If pydeck allows this, can you please explain how?

Related

FastAI fastbook - what does it do and why do I need to set up a book?

I tried running this in my Google Colab notebook:
!pip install -Uqq fastbook
import fastbook
as written in the FastAI book, chapter 2.
But neither the book nor anywhere on Google is there an explanation of what this library actually is.
Amazingly, the page for it does not include any explanation of what fastbook does, only something about a course on deep learning.
So,
what does it do?
Also, when I run:
fastbook.setup_book()
what does that do? In what way does it set up a book, and what kind of book is it?
Thanks.
fastbook.setup_book()
It is used for setup when you are using Google Colab specifically and working with the fastai library. It connects the Colab notebook to Google Drive using an authentication token.
fastbook relates to the book/course.
The materials in the book use fastai, but also other libraries, e.g. pandas, graphviz, etc.
fastbook has almost no library code itself; it mostly contains the dependencies one would need to follow the course. Plus the book itself, of course.
In other words: to run the code from the book/course, you will either need to install fastai, pandas, graphviz, ..., or simply install fastbook.
Watch the author himself talk about this: https://youtu.be/B6BQiIgiEks?t=441
During the course you will download gigabytes of data (images, datasets, pretrained models) and, more generally, work with storage, which you provide from your Google Drive.
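Putting it together, the typical first cell of a course notebook on Colab looks like this (the wildcard import is a convention from the course notebooks, not a requirement):
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()    # on Colab: authenticates and mounts your Google Drive
from fastbook import *   # pulls in fastai and the book's helper utilities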

Octave runs but graph not displayed

num=[1];
den=[1 3 1];
G=tf(num,den);
H=1;
T=feedback(G,H);
step(T);
hold on;
Kp=23;
Ki=0;
Kd=0;
C=pid(Kp,Ki,Kd);
T=feedback(C*G,H);
step(T);
When I run this script, nothing happens in Octave, but it works fine on octave-online.net.
(screenshot: octave-online.net)
(screenshot: Octave on Windows)
I will put a proper answer here for future users, even though the OP has already solved their problem via the comments.
octave-online.net is an excellent cloud service providing an instance of Octave in the cloud.
Contrary to a typical installation of Octave on Linux or Windows, the octave-online client autoloads some of the more popular packages, one of which is control.
You can confirm this by typing pkg list in the octave-online console.
In a normal Linux / Windows installation, however, a package needs to be loaded explicitly before use; in the case of the control package, by doing pkg load control.
Your code uses the functions feedback and pid, both of which rely on the control package, so in your Windows instance the code failed because you tried to use these functions without loading the package first.
Presumably there was also an error in your terminal informing you of this fact, which you may have missed.
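In other words, loading the package at the top of the script should be enough, e.g.:
pkg load control    % provides tf, feedback, pid, step, ...
num = [1];
den = [1 3 1];
G = tf(num, den);
% ... rest of the original script unchanged ...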

Port TensorFlow code to Android

I have written a script for sequence classification using TensorFlow in Python. I would like to port this code to Android. I have seen the example on the TensorFlow GitHub page regarding Android, but that is for images.
Is there any way to directly port my TensorFlow Python code on Android?
The typical way to do this is to build (and train) your model using Python, save the GraphDef proto to a file using tf.train.write_graph(), and then write an app using the JNI to call the C++ TensorFlow API (see a complete example here).
When you build your graph in Python, you should take note of the names of the tensors that will represent (i) the input data to be classified, and (ii) the predicted output values. Then you will be able to run a step by feeding a value for (i), and fetching the value for (ii).
One final concern is how to represent the model parameters in your exported graph. There are several ways to do this, including shipping a TensorFlow checkpoint (written by a tf.train.Saver) as part of your app and running the restore ops to reload it. One method, which has been used in the released InceptionV3 model, is to rewrite the graph so that the model parameters are replaced with "Const" nodes, making the model graph self-contained.
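A minimal sketch of the Python side, using the TF 1.x graph API (the tensor names, shapes and file paths here are illustrative assumptions, not part of the original answer):
import tensorflow as tf
# Build the graph, giving explicit names to the tensors the Android app will feed and fetch.
x = tf.placeholder(tf.float32, shape=[None, 100], name="input")
W = tf.Variable(tf.zeros([100, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b, name="output")
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would go here ...
    saver.save(sess, "/tmp/model.ckpt")                                       # model parameters
    tf.train.write_graph(sess.graph_def, "/tmp", "graph.pb", as_text=False)   # GraphDef proto
The Android/JNI side then loads graph.pb, feeds a value for "input", and fetches "output"; the separate freeze_graph tool can fold the checkpoint values into Const nodes to make the graph self-contained.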
There is also QPython or Kivy.
QPython (an Android app on Google Play) is a script engine that runs Python on Android devices, letting them run Python scripts and projects. It contains the Python interpreter and some other tools such as pip, but there is no compiler available, so only pure-Python packages will work.
Python for Android (the Kivy project's tool) lets you compile a Python application into an Android APK together with additional packages, both pure-Python ones and those that need compiling.

Using graphs generated by Munin in a Swing application

I'm able to generate a usage statistics graph in my browser for a particular PC using Munin. The problem is that I want to display those graphs in a Swing application. Is there any way to do so? What other options are available to generate the same graphs in Swing? Do I have to manually collect the readings and plot the graph accordingly?
As Munin is written in Perl, it should be possible to use ProcessBuilder to invoke the script that generates the desired graph. A related example is seen here.
Alternatively, it may be possible to install Munin locally and fetch the image as suggested in this example.
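A minimal sketch of the second approach, assuming the graph PNG is served by Munin's web interface (the URL below is a placeholder, not a real host):
import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;
import javax.swing.*;

public class MuninGraphViewer {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: point this at a PNG rendered by your Munin installation.
        URL graphUrl = new URL("http://munin.example.com/munin/localhost/localhost/cpu-day.png");
        BufferedImage graph = ImageIO.read(graphUrl);       // fetch the rendered graph
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Munin graph");
            frame.add(new JLabel(new ImageIcon(graph)));    // display it as-is in Swing
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}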

Remote CUDA profiling?

Is it possible to run a CUDA profiling session remotely (similar to computeprof) and then bring the profile back for analysis?
The particular remote machine is headless and not under my control, so no X, no Qt libraries, etc.
Yes you can. The CUDA driver has built-in profiling facilities. How to do it is discussed in the Compute_Profiler.txt file you will find in the doc directory of the toolkit, but the basic idea is something like this:
$ COMPUTE_PROFILE=1 COMPUTE_PROFILE_CSV=1 COMPUTE_PROFILE_LOG=log.csv COMPUTE_PROFILE_CONFIG=config.txt ./app
which tells the runtime to turn on profiling, write CSV-format output to log.csv, and collect the profile statistics listed in config.txt. After the app has run, the runtime will drop an output file with the raw profiling results in it. You can then use the tool of your choice to look at them. The visual profiler can be convinced to open the output, but a lot of the fancy synchronization it does requires the output to be generated using its own profile configuration files (under the hood it is dynamically doing the same thing you do manually, but on the fly). I have done some digging around and scraped copies of those configuration files so I could regenerate specific application profiling runs without the profiler on headless cluster nodes. Not too much fun, but it can be done.
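For illustration, a config.txt might contain lines like the ones below; these option and counter names are only an assumption on my part, and the exact set supported depends on your GPU and toolkit version, so take the authoritative list from Compute_Profiler.txt:
timestamp
gridsize
threadblocksize
occupancy
divergent_branch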