FastAI fastbook - what does it do and why do I need to set up a book? - deep-learning

I tried running this in my Google Colab notebook:
!pip install -Uqq fastbook
import fastbook
as written in the FastAI book, chapter 2.
But neither the book nor anywhere on Google is there an explanation of what this library is at all.
Amazingly, the package's page does not include any explanation of what fastbook does, only a mention of some course for deep learning.
So, what does fastbook do?
Also, when I run:
fastbook.setup_book()
what does that do? In what way does it set up a book, and what kind of book is it?
Thanks.

fastbook.setup_book()
It is used for setup when you are using Google Colab specifically and working with the fastai library. It connects the Colab notebook to Google Drive using an authentication token.

fastbook relates to the book/course.
The materials in the book use fastai, but also other libraries, e.g. pandas, graphviz, etc.
fastbook has almost no library code itself; it mostly declares the dependencies you would need to follow the course. Plus the book itself, of course.
In other words: to run the code from the book/course, you will either need to install fastai, pandas, graphviz, and so on yourself, or simply install fastbook.
You can watch the author himself talk about this: https://youtu.be/B6BQiIgiEks?t=441
During the course you will download gigabytes of data: images, datasets, pretrained models. More generally, the notebooks need working storage, which you provide from your Google Drive.
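As a rough sketch of what such a Colab setup helper does (detect Colab, mount Google Drive, pick a writable root), consider the function below. It is illustrative only and not fastbook's actual implementation:

```python
import os
import sys

def setup_book_sketch():
    """Illustrative stand-in for a Colab setup helper such as
    fastbook.setup_book(): on Colab, mount Google Drive so large
    downloads persist; otherwise fall back to the local filesystem."""
    if "google.colab" in sys.modules:
        # Only available inside Colab; triggers the auth-token flow.
        from google.colab import drive
        drive.mount("/content/gdrive")
        return "/content/gdrive/My Drive"
    # Running locally: just use the current working directory.
    return os.getcwd()
```

Outside Colab, a helper like this simply returns a local directory, which is why the same notebooks can also be run on your own machine.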


Is it possible to use keplergl without Mapbox?

I want to do some spatiotemporal data analysis with kepler.gl.
As the data is confidential and also huge, I cannot upload it to the demo site.
I tried installing keplergl locally, but it needs a Mapbox access token (which I think is paid).
Is there a way I can use kepler.gl with OpenStreetMap? (I want to run it in a Jupyter notebook or through Python, or via a one-time React setup, since I am not familiar with React.)
Also, when I use kepler.gl in a Jupyter notebook, an empty map loads (screenshot: "Without data map"); as soon as I load data, it goes away (screenshot: "With data map"). Here is the console output (screenshot: "error").
My jupyter notebook is also configured with
jupyter nbextension install --py --sys-prefix keplergl
jupyter nbextension enable keplergl --py --sys-prefix
Is there a way to fix the Jupyter notebook error, or is there an alternative open-source tool like keplergl?
I saw some solutions using Tile with Docker, but they were not completely clear. I would prefer a way in which I can use it in a Jupyter notebook.
I tried pydeck from deck.gl, but it wasn't very interactive. For example, I have multiple columns in my database, and kepler.gl lets me filter on each column. kepler.gl also allows one to select different columns as weights. If pydeck allows this, can you please say how?

Export a KNIME workflow as a standalone application or JAR

Is there a way to export or compile a KNIME workflow as a standalone Java application or JAR? I'd like to run the workflow on a platform where KNIME cannot be installed and/or as part of a larger program to simplify a complex but isolated piece of analytics. My options are many, but installing KNIME on the target platform is not one of them.
The only relevant reference I can find on the KNIME site is this ten-year-old(!) forum question. The only answer there links to this project which does seem to be active and says it is 'a KNIME extension that exports KNIME workflows to different workflow engines', though without digging into its code it's not clear what engines those are.
Other than that, I guess your options are:
ask on the KNIME forum again
since KNIME is open source and based on Eclipse, look into the more general question of how to build and run a minimal standalone version of Eclipse - there seem to be some relevant-looking answers here if you search, but I have no further knowledge of how to do that
use scripting nodes in KNIME to develop a text-language version of your workflow, verifying as you go that its output adequately matches the KNIME nodes at each step, and deploy that version to your target. If you need data mining methods, you might want to look at the Weka integration nodes, which you could then substitute with calls to Weka methods.
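The last option can be sketched as follows - a hypothetical two-node chain (a row filter followed by a derived column) reimplemented in plain Python so it can run where KNIME is not installed. The column names and threshold are made up for illustration:

```python
def workflow_step(rows):
    """Pure-Python stand-in for a small KNIME node chain:
    a row filter (keep rows with value > 10) followed by a
    formula node (add a doubled column)."""
    out = []
    for row in rows:
        value = float(row["value"])
        if value > 10:
            out.append({**row, "doubled": value * 2})
    return out

# Verify against a reference export of the same rows from the KNIME
# nodes (here just an inline sample standing in for that export).
sample = [{"id": "a", "value": "12"}, {"id": "b", "value": "5"}]
result = workflow_step(sample)  # -> [{'id': 'a', 'value': '12', 'doubled': 24.0}]
```

The point of the verification step is that you can diff this output against a CSV exported from the KNIME node at the same point in the workflow before trusting the standalone version.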

Adding additional vehicles to local instance of graphhopper

I'm really new to graphhopper and an inexperienced Java programmer, so I need a little help extending graphhopper core. What I've done so far is install graphhopper core from the link in the quickstart guide found here. I'm then able to follow the rest of the instructions, get a local instance up and running just fine, and interact with it using R and the API instructions. I would like to add support for bike, foot, and transit routing. As per the answer found here, I first attempted to modify graph.flagEncoders=car to graph.flagEncoders=car,foot, bike in the "config-example.properties" file. And that's where things fail: when attempting to run the program from the command prompt, it fails at this point. Any help and direction would be greatly appreciated.
Edit: 8/24/2018
Below is a screenshot of the command prompt when using graph.flagEncoders=car,foot, bike in the "config-example.properties" file. Note that I'm using the example OSM data from the quickstart guide.
As per Karussell's comment, in this instance all you have to do is delete the *.osm-gh cache folder that's created by Graphhopper.
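For reference, the encoder list is a comma-separated value in "config-example.properties"; a minimal sketch of the relevant line, with everything else left at the defaults (written here without spaces in the list - whether the stray space above also matters is untested here):

```properties
# config-example.properties
graph.flagEncoders=car,foot,bike
```

After changing the encoder list, delete the *.osm-gh cache folder so GraphHopper rebuilds the graph with the new encoders.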

Has anyone any experience on implementing the R Package XGBoost within the Azure ML Studio environment?

I was hoping that someone has tried it or had success implementing it, and would know of any pitfalls in using it.
You need to zip the package's Windows binaries, upload them as a dataset, and import them into the R environment.
You can follow the instructions over here. I couldn't import it for the latest version, so I simply downloaded the xgboost version from this experiment and loaded it into my saved datasets.
This works for any generic package that is not preloaded in the environment.
The following is a list of experiments that publish R models as a web service.
Hope this helps!
Edit: You can also simply change the R version to Microsoft R Open (current version 3.2.2), and then you can import xgboost like any common library.
Here you can find an example. It shows, for example, that you need to import external libraries individually for both training and scoring.

Is there any OCR SDK for c++ builder?

I'd like to add character recognition functionality to my application, which is why I'm asking what the best available and affordable OCR SDK is. I looked at ABBYY FineReader Engine 10.0, but I haven't received a trial version yet, even though I requested one from the official site!
I've downloaded the Asprise OCR SDK, but it doesn't recognize Cyrillic characters.
How can I implement character recognition in my application? Using what kind of libs, SDKs, APIs, and so on?
There's CuneiForm and Google's Tesseract OCR, both of which are free. Personally I've used Tesseract; the SDK was giving a lot of trouble, so I finally decided to simply call Tesseract's command-line interface with arguments from within my C program using the system() function.
Lots of people face difficulties with the Tesseract installation, so here's a short summary (version 2 works for me, insert appropriate version if necessary):
Download the following from the svn: tesseract-2.00.tar.gz, tesseract-2.00.exe6.tar.gz, tesseract-2.00.eng.tar.gz
Unzip tesseract-2.00.tar.gz to a folder
Unzip tesseract-2.00.exe6.tar.gz and move to where tesseract-2.00.tar.gz was unzipped. A few files will be replaced this way
Similarly, unzip tesseract-2.00.eng.tar.gz and move it to where tesseract-2.00.tar.gz was unzipped; the tessdata folder will be replaced.
After all this is done, open the tesseract.dsw workspace, select All Files and do "Rebuild All." This'll take a while with loads of warnings but hopefully no errors.
The command using DOS shell is tesseract picture.tif textfile -l eng. So basically save your image as a TIFF file, run the command from within your program and then read in the OCR output strings from the text file.
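The call-the-CLI approach can be sketched like this - shown in Python for brevity, though the same pattern works from C/C++ Builder via system(). It assumes the tesseract binary is on PATH and the input is a TIFF file:

```python
import subprocess
from pathlib import Path

def ocr_with_tesseract(image_path, lang="eng"):
    """Run the Tesseract CLI on an image and return the recognized text.
    Tesseract writes its output to <out_base>.txt, so we strip the image
    extension to build the output basename, then read the text back in."""
    out_base = Path(image_path).with_suffix("")
    subprocess.run(
        ["tesseract", str(image_path), str(out_base), "-l", lang],
        check=True,
    )
    return Path(str(out_base) + ".txt").read_text()
```

From C++ Builder, the equivalent is building the same command string, passing it to system(), and then reading the resulting text file back into your program.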
I can recommend Crystal OCR if you don't need to recognize very complex documents; they sent me a C++ Builder sample on request. IMHO, Tesseract is still buggy, though it's the best free OCR, of course.
You can try KSAI-Toolkits. It has a complete OCR application, which includes a C++ API, an OCR model, benchmarks, and test data. It also supports different platforms.