Temporarily disable cuDNN in Caffe

Is it possible to temporarily disable cuDNN in Caffe without having to set engine: CAFFE on every single layer in my train_val file? We want deterministic results while refactoring, but to keep using cuDNN the rest of the time.
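For reference, the per-layer workaround we are trying to avoid looks like this in train_val.prototxt (a minimal sketch; the layer name and parameters are just placeholders):

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    engine: CAFFE    # forces the non-cuDNN implementation for this layer only
    num_output: 64
    kernel_size: 3
  }
}

Repeating this on every convolution, pooling, and activation layer is exactly what we would like to avoid.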

Related

How to know if a program is being executed on the GPU or CPU on Ubuntu?

I am using Ubuntu 16.04 and running a program to train a deep learning model. The epoch count is large and the program is very slow, so I want to make sure it is actually running on the GPU.
How can I check that?
Thanks!
I assume you are using an Nvidia GPU.
The simplest way to achieve this is to type the following command inside a terminal:
watch -n 1 nvidia-smi
This gives you a continuously refreshed view (every second) without flooding the terminal with output.
You will be able to see which processes are using your GPU, the memory they have allocated, the temperature, the power draw, etc.
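If you prefer a one-shot, scriptable check, nvidia-smi can also list just the compute processes (assumes a reasonably recent driver):

nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

If your training process appears in that list with memory allocated, it is running on the GPU.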

What file system does MySQL use?

Does MySQL use fread, read, mmap, or some other call when saving database data to disk on a Linux OS? Or does MySQL run a test to decide which one to use? This is not in reference to saving config data; I'm interested in the actual database, preferably InnoDB.
Thanks for any help.
Edit: To be more specific, I'm interested in the C/C++ source code in MySQL that makes the actual calls that save data to an InnoDB database. Possible options are fread, read, and mmap, among others.
The MySQL access method code (InnoDB, MyISAM, AriaDB, and the rest) uses the native file system of the host volume on the host operating system: NTFS on Windows, ext4 on UNIX-like systems, and so on. The platform ports use a variety of I/O techniques, including memory mapping, scatter/gather, and ordinary read and write system calls, and they integrate with the file systems' journaling features. The exact techniques used depend on the kind of query, the access method, and the state of the caches.
Pro tip: Don't worry about this for performance reasons unless your server is running on an old 32-bit 486 machine you found in a storeroom (or unless you have millions of users and billions of rows of data).
On Linux systems any POSIX filesystem will work. fread is a libc construct that translates into underlying syscalls such as read, mmap, and write.
The read, mmap, and write operations are implemented in the Linux VFS (virtual file system) layer before they map to operations in the specific filesystem code, so any POSIX filesystem will work with MySQL.
The only filesystem test I've seen in the MySQL code is for the fallocate syscall, which isn't implemented on all filesystems (especially back when the check was first added; it's widely available now). There is an implementation workaround for when fallocate isn't available.
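If you would rather observe this than read the source, you can watch the syscalls a running server makes. A quick sketch using strace (assumes Linux on x86-64, strace installed, and a single mysqld process):

# attach to mysqld and log only the I/O-related syscalls
sudo strace -f -e trace=pread64,pwrite64,fsync,fdatasync,fallocate -p $(pidof mysqld)

Run a few INSERTs from another session and you will typically see pwrite64 and fsync calls against the InnoDB data and redo log files.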

Google OpenRefine doesn't load a big CSV file

When I try to create a project, I load a CSV file with 3.5 million rows (400 MB),
and Refine doesn't finish importing it;
the indicator shows 100% and 1037 MB.
I opened refine.ini and raised the memory limit, but it made no difference:
# NOTE: This file is not read if you run the Refine executable directly
# It is only read if you use the refine shell script or refine.bat
no_proxy="localhost,127.0.0.1"
#REFINE_PORT=3334
#REFINE_HOST=127.0.0.1
#REFINE_WEBAPP=main\webapp
# Memory and max form size allocations
#REFINE_MAX_FORM_CONTENT_SIZE=104857600
REFINE_MEMORY=100000M
# Set initial java heap space (default: 256M) for better performance with large datasets
REFINE_MIN_MEMORY=100000M
# Some sample configurations. These have no defaults.
#ANT_HOME=C:\grefine\tools\apache-ant-1.8.1
#JAVA_HOME=C:\Program Files\Java\jdk1.6.0_25
#JAVA_OPTIONS=-XX:+UseParallelGC -verbose:gc -Drefine.headless=true
#JAVA_OPTIONS=-Drefine.data_dir=C:\Users\user\AppData\Roaming\OpenRefine
# Uncomment to increase autosave period to 60 mins (default: 5 minutes) for better performance of long-lasting transformations
#REFINE_AUTOSAVE_PERIOD=60
What should I do?
Based on the testing I did and published at https://groups.google.com/d/msg/openrefine/-loChQe4CNg/eroRAq9_BwAJ, to process 3.5 million rows you probably need to allocate around 8 GB of RAM to get a reasonably responsive project.
As documented in OpenRefine changing the port and host when executable is run directly, when running OpenRefine on Windows, where you set these options depends on whether you start OpenRefine via the exe file or the bat file.
To allocate over 4 GB of RAM you definitely need a 64-bit Java version - check which Java OpenRefine is running in (it will use the Java specified in JAVA_HOME). A 32-bit JVM cannot address a heap that large even on a 64-bit OS (see Maximum Java heap size of a 32-bit JVM on a 64-bit OS).
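Concretely, a refine.ini along these lines would be a reasonable starting point (8192M is a suggestion based on the testing above, not an official default):

REFINE_MEMORY=8192M
REFINE_MIN_MEMORY=8192M

To confirm the JVM is 64-bit, run the Java that JAVA_HOME points at (Windows shown here; on Linux use "$JAVA_HOME/bin/java" -version):

"%JAVA_HOME%\bin\java" -version

A 64-bit build reports something like "64-Bit Server VM" in its output; if that is missing, you are on a 32-bit JVM and the larger heap will not take effect.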

MySQL slow on Windows, fast on Linux. Why?

I have installed a Spring MVC web application with JPA and a MySQL database.
The application displays statistics from the database (with a lot of SELECTs).
It works quite fast on Linux (MySQL 5.5.54), but it is very slow on Windows 10 (MySQL 5.6.38).
Do you know what could cause such behaviour on Windows?
Or could you give me hints or tell me where to search?
[UPDATE]
Linux: Intel® Core™ i7-4510U CPU @ 2.00GHz × 4, 8 GB RAM
Windows: Intel Xeon CPU E31220 @ 3.10GHz, 4 GB RAM
I know that the Windows machine is not as "powerful" as the Linux one. I wonder if increasing the memory would be enough, or does MySQL need a lot of CPU too?
My list would be:
Check the configs are identical - and not just the settings in my.ini: values not set there are set at compile time, and the two instances have definitely been compiled separately. You'll need to capture and compare the output of SHOW VARIABLES (see the example after this list).
Check the file deployment is similar - whether InnoDB is configured to use one file per table, and whether the files are distributed across multiple disks.
Check adequate memory is available for caching on MSWindows.
Disable the anti-virus.
Make sure MSWindows is configured as a server (prioritize background tasks).
Windows sucks, deal with it :)
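A minimal way to capture and diff the two configurations (credentials and file names are placeholders):

mysql -u root -p -e "SHOW VARIABLES" > vars_linux.txt
mysql -u root -p -e "SHOW VARIABLES" > vars_windows.txt
diff vars_linux.txt vars_windows.txt

Anything that differs - buffer pool size, flush settings, per-thread buffers - is a candidate explanation for the gap.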

How to start a spark-shell using all snappydata cluster servers?

I can't seem to find a way to start a shell that uses all the servers set up in conf/servers.
I have only found a way to submit jobs to the cluster using /bin/snappy-job.sh, where I specify the lead location, but I would like an interactive shell to run some tests against the whole cluster.
Thank you,
Saif
Please see this link; it explains how to start a spark-shell and connect it to the snappy store:
http://snappydatainc.github.io/snappydata/connectingToCluster/#using-the-spark-shell-and-spark-submit
Essentially you need to provide the locator property, and this locator is the same one you used to start the snappy cluster.
$ bin/spark-shell --master local[*] --conf snappydata.store.locators=locatorhost:port --conf spark.ui.port=4041
Note that with the above command a separate compute cluster is created to run your program; the snappy cluster is not used for computation when you run your code from this shell. The required table definitions and data are fetched efficiently from the snappy store.
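Once connected, tables in the snappy store can be queried as ordinary DataFrames from the shell. A minimal sketch, assuming the SnappyData 0.x/1.x-era Scala API and a hypothetical table named "airline":

// build the SnappyData entry point on top of the shell's SparkContext
val snc = org.apache.spark.sql.SnappyContext(sc)
// read the store table as a DataFrame and print a few rows
snc.table("airline").show()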
In the future we might make this shell connect to the snappy cluster in such a way that the snappy cluster itself is used as the compute cluster.