Neo4j - Couldn't load the external resource at: file: - csv

I am using Ubuntu 14.04 and trying to import a CSV file, but I am getting the following error: Couldn't load the external resource at: file:/usr/share/neo4j/import/orders.csv
My query is:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///orders.csv" AS row
MATCH (order:Order {orderId: row.SalesOrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (order)-[pu:PRODUCT]->(product)
ON CREATE SET pu.unitPrice = toFloat(row.UnitPrice), pu.quantity = toFloat(row.OrderQty);
I have placed the CSV files in /var/lib/neo4j/import and also changed the permissions with sudo chmod -R 777 /var/lib/neo4j/import, but it is still not working.
The file permissions are as follows:
sachin@sachin:/var/lib/neo4j$ ls -la
total 28
drwxr-xr-x 7 neo4j adm 4096 Aug 31 10:10 .
drwxr-xr-x 76 root root 4096 Aug 30 19:33 ..
drwxr-xr-x 2 neo4j nogroup 4096 Aug 31 10:10 certificates
drwxr-xr-x 4 neo4j adm 4096 Aug 31 10:10 data
drwxrwxrwx 2 neo4j adm 4096 Aug 31 11:16 import
drwxr-xr-x 2 neo4j nogroup 4096 Aug 31 10:10 .oracle_jre_usage
drwxr-xr-x 2 neo4j adm 4096 Jul 28 09:19 plugins
Please help! Thanks.

Okay, I've resolved it by creating a new import directory under /usr/share/neo4j, placing the CSV files in that directory, and setting its permissions to 777.
Explanation: my error was Couldn't load the external resource at: file:/usr/share/neo4j/import/orders.csv, while I had placed my CSV files at /var/lib/neo4j/import. Neo4j resolves file:/// URLs against its configured import directory, so the files have to live where the error message says it is looking. Hope it helps others. Thanks.
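For reference, rather than creating a directory to match the error message, you can point Neo4j at the directory you are already using. A minimal sketch, assuming a Neo4j 3.x install where the import root is configurable (the file is typically /etc/neo4j/neo4j.conf on Debian-style packages):
dbms.directories.import=/var/lib/neo4j/import
Restart the service afterwards (sudo service neo4j restart) so that LOAD CSV resolves file:/// URLs against that directory.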

Related

Separating MySQL Data & Logs in Different Directories

I am not very experienced with databases so I hope my question makes sense.
I am setting up a MySQL/MariaDB instance for Nextcloud on my Ubuntu server and the data will be stored on a ZFS pool.
Several guides/blogs mention that it's best practice to separate data and log files into separate datasets with different properties. So I added the following to my my.cnf configuration file so that logs and data are stored in different directories rather than directly under /var/lib/mysql, which is the default.
[mysqld]
datadir = /var/lib/mysql/data
innodb_log_group_home_dir = /var/lib/mysql/log
innodb_data_home_dir = /var/lib/mysql/data
slow_query_log_file = /var/lib/mysql/log/slow.log
log_error = /var/lib/mysql/log/error.log
aria-log-dir-path = /var/lib/mysql/log
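(One quick way to confirm these settings took effect after restarting the server, a minimal sketch using the stock mysql client:)
mysql -e "SHOW VARIABLES LIKE 'datadir';"
mysql -e "SHOW VARIABLES LIKE 'innodb_log_group_home_dir';"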
In the /var/lib/mysql directory I see that the data/ and log/ directories have been created and contain some files.
drwxr-xr-x 7 mysql mysql 9 Apr 18 08:03 ./
drwxr-xr-x 8 root root 8 Apr 6 00:10 ../
drwxr-xr-x 2 mysql mysql 5 Apr 18 08:03 data/
drwxr-xr-x 2 mysql mysql 6 Apr 18 08:03 log/
-rw-rw---- 1 mysql mysql 0 Apr 18 08:03 multi-master.info
drwx------ 2 mysql mysql 90 Apr 18 08:03 mysql/
-rw-r--r-- 1 mysql mysql 15 Apr 18 08:03 mysql_upgrade_info
drwx------ 2 mysql mysql 3 Apr 18 08:03 nextcloud/
drwx------ 2 mysql mysql 3 Apr 18 08:03 performance_schema/
However, there are still some additional files and folders in the /var/lib/mysql directory (I guess one folder per database; they only contain a db.opt file).
Are these files/folders considered neither data nor logs, and if I were to back up or recreate the database(s) on a different system, would I need to copy anything other than the data/ directory?
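(If the goal is to recreate the database(s) on a different system, a logical dump sidesteps the file-layout question entirely. A minimal sketch, assuming the database is named nextcloud and the client has sufficient privileges:)
mysqldump --single-transaction nextcloud > nextcloud.sql
# on the target system, into a freshly created empty database:
mysql nextcloud < nextcloud.sql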

How to start MariaDB on boot after external drive is mounted

I am using a Raspberry Pi 3 with OSMC as the operating system, along with Debian Stretch and nginx, and I manually installed MariaDB 10.2 following some instructions I found a while back.
I have changed the datadir for MariaDB to /media/USBHDD2/shared/mysql
When I boot, or reboot, the Pi, MariaDB fails to start. Before, when I had the default datadir = /var/lib/mysql, it was all fine. If I change it back, it is fine.
However, if I login to the console I can successfully start it by using
service mysql start
Note that I am using 'service' rather than 'systemctl'; the latter does not work. The files mariadb.service and mysql.service do not exist anywhere.
In /etc/init.d I find two files, mysql and myswql, which seem to be identical. If I remove myswql from the directory, MariaDB won't start at all. I have tried editing these by putting, for example, a sleep 15 at the beginning, but to no avail. I have read all sorts of solutions that try to test whether USBHDD2 is mounted, e.g. using
while ! test -f /media/USBHDD2/shared/test.txt
do
sleep 1
done
which I tried in the /etc/init.d/mysql and myswql files, and also in rc.local before calling for the start of mysql.
But that doesn't work either.
I also renamed the links in rc?.d to S99mysql so it starts after everything else; still no joy.
I have spent two full days on this to no avail. What do I need to do to get this working so that mysql starts on boot?
The file system is NTFS.
The output from ls -la /media/USBHDD2/shared/mysql is as follows:
total 176481
drwxrwxrwx 1 root root 4096 Mar 27 11:41 .
drwxrwxrwx 1 root root 4096 Mar 27 13:06 ..
-rwxrwxrwx 1 root root 16384 Mar 27 11:41 aria_log.00000001
-rwxrwxrwx 1 root root 52 Mar 27 11:41 aria_log_control
-rwxrwxrwx 1 root root 0 Nov 3 2016 debian-10.1.flag
-rwxrwxrwx 1 root root 12697 Mar 27 11:41 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Mar 27 11:41 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Mar 26 22:02 ib_logfile1
-rwxrwxrwx 1 root root 79691776 Mar 27 11:41 ibdata1
drwxrwxrwx 1 root root 32768 Mar 25 18:37 montegov_admin
-rwxrwxrwx 1 root root 0 Nov 3 2016 multi-master.info
drwxrwxrwx 1 root root 20480 Sep 3 2019 mysql
drwxrwxrwx 1 root root 0 Sep 3 2019 performance_schema
drwxrwxrwx 1 root root 86016 Mar 25 20:06 rentmaxpro_wp187
drwxrwxrwx 1 root root 0 Sep 3 2019 test
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_admin
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_demo
drwxrwxrwx 1 root root 40960 Mar 25 21:05 trustedhomerenta_meta
drwxrwxrwx 1 root root 36864 Mar 25 21:25 trustedhomerenta_montego
drwxrwxrwx 1 root root 36864 Mar 26 20:37 trustedhomerenta_testmontego
The problem is that the external drive is formatted as NTFS.
MySQL requires the files and directories to be owned by mysql:mysql, but since NTFS does not have the same system of owners and groups as Linux, the Linux mount process assigns its own owner and group to the file structure when mounting the drive. By default this ends up being root:root, so MySQL cannot use them.
NTFS does not allow chown to work, so there is no way to change the ownership away from root.
One solution is to back up all the files, repartition the drive as ext4, and then restore all the files.
The solution I finally used was to specify mysql as the owner and group at the time the drive is mounted. Thus my /etc/fstab entry became:
UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=mysql,gid=mysql 0 2
and now MariaDB starts properly at boot.
phew ;-)
Thanks @danblack for getting me thinking in the right direction.
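(For readers whose MariaDB does run under systemd, a unit drop-in that orders the service after the mount is a cleaner alternative than fstab ownership tricks; a minimal sketch, untested on the OSMC/init.d setup described above:)
sudo systemctl edit mariadb.service
# add the following in the editor that opens:
[Unit]
RequiresMountsFor=/media/USBHDD2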

gulp chown doesn't change owner

I am trying to make a build process, sort of, and it seems like gulp-chown doesn't give me the correct results.
This is what I run:
gulp.task('clientDeploy', function () {
    return gulp.src('client/dist/**/*')
        .pipe(chown('rf', 'rfids'))
        .pipe(gulp.dest('/var/www/html/dashboard'));
});
The gulp script runs as root, obviously.
The result is this:
drwxr-xr-x 2 root root 4.0K Jun 29 12:57 css/
drwxr-xr-x 2 root root 4.0K Jun 29 12:57 fonts/
drwxr-xr-x 2 root root 4.0K Jun 29 12:57 icons/
drwxr-xr-x 3 root root 4.0K Jun 29 12:57 images/
drwxr-xr-x 2 root root 4.0K Jun 29 12:57 js/
-rw-rw-r-- 1 root root 8.3K Jun 29 13:15 events-panel.html
-rw-r--r-- 1 root root 20K Jun 29 13:15 index.html
-rw-rw-r-- 1 root root 8.2K Jun 29 13:15 main-panel.html
I've read here on GitHub that the problem might be with gulp.dest(), which doesn't read the file's metadata and instead uses the user that runs the command.
Has anyone ever come across this and solved it?
It is a bug in vinyl-fs. When it writes a file to disk, it honors the file.stat.mode (where file is a vinyl File object) but it completely ignores the values of file.stat.uid and file.stat.gid.
It looks like it has been fixed in the code base of vinyl-fs but AFAIK there is no release yet that contains the fix.
Someone in the bug report you linked to mentions having cloned gulp-chown but I don't think that's necessary. I just change the file ownership after gulp.dest has done its work:
import gulp from "gulp";
import es from "event-stream";
import fs from "fs";

gulp.task("build",
    () => gulp.src("src/**/*")
        .pipe(gulp.dest("build"))
        .pipe(es.map((file, callback) => {
            // restore the ownership recorded on the vinyl File object,
            // since gulp.dest ignores file.stat.uid and file.stat.gid
            fs.chown(file.path, file.stat.uid, file.stat.gid,
                (err) => callback(err, file));
        })));
You see, I'm just using file.stat.uid and file.stat.gid because my goal is to preserve the ownership that the source file has. (Yeah, because vinyl-fs does not even do this by default.) You can put any uid and gid you want there.
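(If you specifically want the names from the question, 'rf' and 'rfids', rather than the source files' ownership, note that fs.chown expects numeric IDs. A hedged and much simpler workaround is to resolve the names on the shell side and chown the destination after the build finishes:)
sudo chown -R rf:rfids /var/www/html/dashboard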

Docker: correct way of persisting container data to host

I'm using OS X and running Docker via a Boot2Docker VM.
I've been trying to figure out how to persist a container's data (MySQL official docker image) to the host but without much success.
I keep receiving an error stating that the /var/lib/mysql directory that the MySQL service is trying to write to is not accessible.
docker run -e MYSQL_ROOT_PASSWORD=12345 -v "$(pwd)/.docker-volumes/mysql:/var/lib/mysql" mysql:5.6
Looking at the permissions of the mounted directory in the container, this is what I see:
root@mysql:/# ls -la /var/lib/
total 44
drwxr-xr-x 16 root root 4096 Jan 27 18:35 .
drwxr-xr-x 18 root root 4096 Jan 27 18:35 ..
drwxr-xr-x 7 root root 4096 Jan 27 18:35 apt
drwxr-xr-x 14 root root 4096 Jan 27 18:35 dpkg
drwxr-xr-x 2 root root 4096 Jul 14 2013 initscripts
drwxr-xr-x 2 root root 4096 Jul 14 2013 insserv
drwxrwsr-x 2 libuuid libuuid 4096 Dec 11 2012 libuuid
drwxr-xr-x 2 root root 4096 Dec 24 13:41 misc
drwxr-xr-x 1 1000 staff 102 Feb 4 15:10 mysql
drwxr-xr-x 2 root root 4096 Jan 27 16:48 pam
drwxr-xr-x 2 root root 4096 Nov 23 2012 update-rc.d
drwxr-xr-x 2 root root 4096 Jul 14 2013 urandom
As you can see, the mysql directory is owned by 1000 and belongs to the group 'staff'.
My assumption is that the service process running MySQL is probably set to another user (mysql) and therefore I get this error.
I've read that this specific issue can be solved using data volume containers, but since they persist the data only for as long as some container is actually using the volume, it's not a good solution for me.
Am I approaching this in the wrong way?
Thanks.
You're definitely better off using a data volume container; I do the same thing with local psql and couchdb databases. The data actually persists, it's just not accessible unless you link the volume to a container. To actually force the volume to be removed you have to specify docker rm -v, which will remove the data volume if no other container is linked to it.
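(A minimal sketch of that pattern, with illustrative container names; docker create and --volumes-from do the work:)
# a container that exists only to own the /var/lib/mysql volume
docker create -v /var/lib/mysql --name mysql_data mysql:5.6 /bin/true
# run the database against that volume
docker run -e MYSQL_ROOT_PASSWORD=12345 --volumes-from mysql_data --name mysql_server mysql:5.6
# removing mysql_server keeps the data; the volume is only deleted by
# an explicit docker rm -v once no other container references it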

Load CSV Fails in Cypher + Neo4j "LoadExternalResourceException: Couldn't load the external resource at:"

I have a fresh install of Neo4j 2.1.4 open source on a corporate cloud server running Ubuntu 14.04. I am importing a CSV file into the database. The path to my file is '/home/username/data-neo4j/node.csv'
Below is my command, which I run from the Neo4j command line tool neo4j-shell:
LOAD CSV WITH HEADERS FROM "file:///home/username/data-neo4j/node.csv" AS line CREATE (:Node { nid: toInt(line.nid), title: line.title, type: line.type, url: line.url});
This returns:
LoadExternalResourceException: Couldn't load the external resource at: file:/home/user/data-neo4j/node.csv
This looks like a message saying it can't find the file. However, the file is in place. I even tried changing the permissions on the file to be 755.
I have a separate instance of Neo4j on my local machine (OSX with Neo4j 2.1.2 Enterprise). The command is successful on my local machine, given that I switch the path to match.
One thing I notice: when I run neo4j-shell, I get NOTE: Remote Neo4j graph database service 'shell' at port 1337. I have opened this port, and my command still returns the same error message.
I also read through this link - but their problem was that they had not uploaded their file. My file is in place.
neo4j LOAD CSV returns Couldn't Load external resource
sheldonkreger, your co-worker is right. Thanks to him.
I solved it the same way, but you actually don't need to place the file in a location where the neo4j user has permissions (for example /var/log/neo4j or /var/lib/neo4j), as he suggested.
Instead, just go to the Neo4j directories mentioned above, look at the file permissions there, and give your CSV file (or whichever file you are trying to import) the same permissions.
For example, on my system the file permissions in the Neo4j folder looked like this:
ls -la
total 208
drwxr-xr-x 4 neo4j adm 4096 Feb 4 10:35 .
drwxr-xr-x 87 root root 4096 Feb 11 22:21 ..
drwxr-xr-x 3 neo4j adm 4096 Feb 4 10:35 bin
-rw-r--r-- 1 neo4j adm 61164 Jan 29 22:32 CHANGES.txt
lrwxrwxrwx 1 neo4j adm 10 Sep 30 12:07 conf -> /etc/neo4j
drwxr-xr-x 4 neo4j adm 4096 Mar 13 13:25 data
lrwxrwxrwx 1 neo4j adm 20 Sep 30 12:07 lib -> /usr/share/neo4j/lib
-rw-r--r-- 1 neo4j adm 125517 Jan 29 22:32 LICENSES.txt
lrwxrwxrwx 1 neo4j adm 24 Sep 30 12:07 plugins -> /usr/share/neo4j/plugins
-rw-r--r-- 1 neo4j adm 1568 Jan 29 22:32 README.txt
lrwxrwxrwx 1 neo4j adm 23 Sep 30 12:07 system -> /usr/share/neo4j/system
-rw-r--r-- 1 neo4j adm 4018 Jan 29 22:30 UPGRADE.txt
So I did the same for my file, and Neo4j was successfully able to run the import command.
I did this:
sudo chown neo4j:adm <csv file location>
A co-worker helped me debug this.
The problem was permissions. On Linux, Neo4j runs as its own user, 'neo4j'. That user did not have permission to access the data at /home/myuser/data-neo4j/node.csv
We moved the data to a folder where the neo4j user has permissions and adjusted the path in the query.
For future reference, the Neo4j log can provide additional info; on Linux it is found at /var/log/neo4j
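(A quick way to diagnose this kind of problem before moving anything: try reading the file as the neo4j user; note that every directory along the path needs execute permission for that user as well.)
sudo -u neo4j head -n 1 /home/myuser/data-neo4j/node.csv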