I'm importing a CSV using:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///root/rahul/Neo4jData/after_etl_data.csv" AS row
CREATE (:AFTER_ETL_DATA {user_tweet_id:row.user_tweet_id});
I'm getting the following error:
QueryExecutionKernelException: Couldn't load the external resource at:
file:/root/rahul/Neo4jData/after_etl_data.csv
Things I have done:
1) Changed the permissions of the file to 777
2) Changed the owner of the file, i.e.:
-rwxrwxrwx 1 neo4j adm 553942876 Sep 12 13:54 after_etl_data.csv
3) Added the line
dbms.security.allow_csv_import_from_file_urls=true
in /etc/neo4j/neo4j-server.properties
I'm using Neo4j 2.2.5.
I don't know how to solve this.
Note - However, if I start my shell using
./neo4j-shell -path graph.db -config /var/lib/neo4j/conf/neo4j.properties
I'm able to insert the data, but since it starts in local mode, I'm unable to view the data in the Neo4j UI.
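The working shell invocation above reads /var/lib/neo4j/conf/neo4j.properties, which suggests the server may pick up the import setting from neo4j.properties rather than neo4j-server.properties in 2.2.x. A minimal sketch of that guess (the service name is an assumption; adjust for your install):
# assumption: 2.2.x reads dbms.security.* from neo4j.properties, not neo4j-server.properties
echo 'dbms.security.allow_csv_import_from_file_urls=true' | sudo tee -a /etc/neo4j/neo4j.properties
sudo service neo4j-service restart   # restart so the setting is picked up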
I'm attempting to source a file 2019Dump.sql within another file, solution2.sql.
Use a backup file 2019Dump.sql to load the pre 01 January 2020 contents of
relational tables LOCATED, SERVES and ORDERS into the database abc123.
I am running solution2.sql through the terminal using source solution2.sql.
When I run solution2.sql it is meant to source and run 2019Dump.sql.
I have tried 2 methods that I have found, and both give error (42000):
source 2019Dump.sql;
mysql -u root -p abc123 < 2019Dump.sql;
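Since error class 42000 is a SQL syntax error, one hedged guess is that the second method (a shell command) was placed inside the .sql file, where the server cannot parse it. source is a mysql client built-in, so it only works inside the client, and relative paths resolve against the directory the client was started from. A minimal sketch, assuming both files sit in the same directory (the path is an assumption):
cd /path/to/sql/files          # directory holding solution2.sql and 2019Dump.sql
mysql -u root -p abc123
-- then at the mysql> prompt (client commands need no terminator):
source solution2.sql
-- and inside solution2.sql, keep the nested load as a client command:
-- source 2019Dump.sql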
I am working on Flume to append data from a local directory to HDFS using the Flume source TAILDIR.
My use case is to do a delta load: if a new line arrives in the source file in the local directory, it should be appended in HDFS.
This is my Flume conf file:
#configure the agent
agent.sources=r1
agent.channels=k1
agent.sinks=c1
agent.sources.r1.type=TAILDIR
agent.sources.r1.positionFile = /home/flume/Documents/taildir_position.json
agent.sources.r1.filegroups=f1
agent.sources.r1.filegroups.f1=/home/flume/Documents/spooldir/
agent.sources.r1.batchSize = 20
agent.sources.r1.writePosInterval=2000
agent.sources.r1.maxBackoffSleep=5000
agent.sources.r1.fileHeader = true
agent.sources.r1.channels=k1
agent.channels.k1.type=memory
agent.channels.k1.capacity=10000
agent.channels.k1.transactionCapacity=1000
agent.sinks.c1.type=hdfs
agent.sinks.c1.channel=k1
agent.sinks.c1.hdfs.path=hdfs://localhost:8020/flume_sink
agent.sinks.c1.hdfs.batchSize = 1000
agent.sinks.c1.hdfs.rollSize = 268435456
agent.sinks.c1.hdfs.writeFormat=Text
While running the Flume command:
flume-ng agent -n agent -c conf -f /home/swechchha/Documents/flumereal.conf
I am getting an error loading the JSON file. It crashes at line 110.
Please make sure that the flume user has access to that JSON file and that the file is correctly formatted.
The flume.conf mentioned in the question statement has a problem.
TAILDIR source: watches the specified files, and tails them in near real-time once new lines appended to each file are detected. While new lines are being written, this source retries reading them, waiting for the write to complete.
When the filegroups property points to a directory that may contain multiple files, it should be specified as a directory path plus a file name pattern, like this:
agent.sources.r1.filegroups.f1=/home/flume/Documents/spooldir/.*txt.*
Then run the agent with this flume.conf and check the result; it should work fine.
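A quick way to verify the delta load end to end (a sketch; the paths come from the conf above, and FlumeData is Flume's default HDFS file prefix):
echo "new delta line" >> /home/flume/Documents/spooldir/sample.txt   # append to a file matching the filegroup pattern
hdfs dfs -ls /flume_sink                                             # the sink should create FlumeData.* files
hdfs dfs -cat /flume_sink/FlumeData.*                                # the appended line should show up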
I have a file on S3 called data.csv.gz. It is a gzipped CSV. I've successfully ungzipped it with the gunzip command, so as far as I can tell it's gzipped correctly.
Running the following command gives an error:
COPY to_table ("id", "something", "something_else")
FROM 's3://my.domain.com/somewhere/data.csv.gz'
CREDENTIALS 'aws_access_key_id=********;aws_secret_access_key=********'
IGNOREHEADER 1 TRUNCATECOLUMNS CSV REGION 'us-east-1' GZIP;
The error is:
-----------------------------------------------
error: Failed writing body (0 != 575) Cause: Failed to inflate invalid or incomplete deflate data. zlib error code: -3
code: 9001
context: S3 key being read : ...
...
-----------------------------------------------
What does this mean and what can be done to fix it?
The file is SSE-S3 encrypted, if that matters - which from what I can tell, it shouldn't.
This happens when you use the GZIP option during COPY but the file cannot actually be read as gzip.
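A quick way to confirm that is to stream the object and test it as gzip (a sketch; the key is copied from the question):
aws s3 cp s3://my.domain.com/somewhere/data.csv.gz - | gzip -t && echo "valid gzip"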
I was facing the same issue. I deleted the existing file folder in S3, re-ran the UNLOAD script, and then the COPY script.
In my case, the .gz file was intact; it was a legal gzip file. I double-checked it using the file -i and gzip -v -t commands.
I even unzipped the file, re-zipped it, and uploaded it to S3, but I still had the same error.
Then I found out that the last row in this .gz file was "corrupted": it was cut in half somehow. I had to delete this row, re-zip the file, and upload it to S3.
Everything was OK after that.
However, the mystery is that we should have gotten an stl_load_errors entry if a row was cut in half like this ¯\_(ツ)_/¯
My only guess is that our original gzip file was still corrupted but undetectable using those two commands, and I was still able to unzip it.
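A sketch of the repair described above (file names are assumptions):
gzip -dc data.csv.gz > data.csv                        # unzip, keeping the original archive
tail -1 data.csv                                       # inspect the truncated last row
sed -i.bak '$d' data.csv                               # drop it (or fix it by hand)
gzip -c data.csv > data.csv.gz                         # re-zip
aws s3 cp data.csv.gz s3://my.domain.com/somewhere/data.csv.gz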
I am trying to convert VMX to OVF format using OVFTool as below; however, it gives an error:
C:\Program Files\VMware\VMware OVF Tool>ovftool.exe
vi://vcenter.com:port/folder/myfolder/abc.vmx abc.ovf
Error: Failed to open file: https://vcenter.com:port/folder/myfolder/abc.vmx
Completed with errors
Please let me know if you have any solution.
I had a similar situation in VMware Fusion trying to use a .vmx that was probably created on Windows. I could boot the VM, but any attempt to export the machine with ovftool or use vmware-vdiskmanager bombed out with:
Error: Failed to open disk: source.vmdk
Completed with errors
The disk name was totally valid, the path was valid, the permissions were valid, and the only clue came from running ovftool with:
ovftool --X:logToConsole --X:logLevel=verbose source.vmx dest.ova
Opening VMX source: source.vmx
verbose -[10C2513C0] Opening source
verbose -[10C2513C0] Failed to open disk: ./source.vmdk
verbose -[10C2513C0] Exception: Failed to open disk: source.vmdk. Reason: Disk encoding error
Error: Failed to open disk: source.vmdk
As others suggested, I took a peek in the .vmdk. Therein I found 3 other clues:
encoding="windows-1252"
createType="monolithicSparse"
# Extent description
RW 16777216 SPARSE "source.vmdk"
So first I converted the monolithicSparse vmdk to "preallocated virtual disk split in 2GB files":
vmware-vdiskmanager -r source.vmdk -t3 foo.vmdk
Then I could edit "foo.vmdk" to change the encoding, which now looks like:
encoding="utf-8"
createType="twoGbMaxExtentFlat"
# Extent description
RW 8323072 FLAT "foo-f001.vmdk" 0
RW 8323072 FLAT "foo-f002.vmdk" 0
RW 131072 FLAT "foo-f003.vmdk" 0
And finally, after fixing up the source.vmx:
scsi0:0.fileName = "foo.vmdk"
profit:
ovftool source.vmx dest.ova
...
Opening VMX source: source.vmx
Opening OVA target: dest.ova
Writing OVA package: dest.ova
Transfer Completed
Completed successfully
I had a similar problem with OVFTool trying to export to OVF format.
Export failed: Failed to open file: C:\Virtual\test\test.vmx.
First, I opened the .VMX file in an editor (it's a text file) and made sure that settings like
scsi0:0.fileName = "test.vmdk"
nvram = "test.nvram"
extendedConfigFile = "test.vmxf"
mention proper file names.
Then I noticed this line:
.encoding = "windows-1251"
This is a Cyrillic code page, so I modified it to use the Western code page:
.encoding = "windows-1252"
Then, running OVFTool gave a different error:
Export failed: Failed to open disk: test.vmdk.
To fix it I had to open the .VMDK file in a hex editor (because it's usually a big binary file), find the string
encoding = "windows-1251"
(it's somewhere near the beginning of the file), and replace "1251" with "1252".
And it did the trick!
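If you don't have a hex editor handy, the same fix can be scripted on Linux/macOS, since the replacement has exactly the same length (a sketch; the file name is an assumption):
off=$(grep -abo 'windows-1251' test.vmdk | head -n1 | cut -d: -f1)     # byte offset of the string
printf 'windows-1252' | dd of=test.vmdk bs=1 seek="$off" conv=notrunc  # same-length in-place overwrite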
In my case, I needed to repair the disk 'abc.vmdk' before converting 'abc.vmx' to 'abc.ovf'.
Use this for Linux:
$ /usr/bin/vmware-vdiskmanager -R /home/user/VMware/abc.vmdk
See this link https://kb.vmware.com/s/article/2019259 to resolve the issue on Windows and Linux.
Try to run it as described below:
C:\Program Files\VMware\VMware OVF Tool>ovftool C:\Win-Test\Win-Test.vmx (location of your vmx file) C:\Win-Test\win-test.ovf (destination)
Maybe ovftool is unable to recognize the path you are giving.
Try with the following command:
ovftool --eula#=[path to eula] --X:logToConsole --targetType=OVA --compress=9 vi://[username]:[ESX address] [target address]
Once you provide the ESX address, it will list the folders you have created on your ESX box. Then you can trigger the command mentioned above again, appending the folder name.
If no folder hierarchy is present on your box, it will simply list the VM names.
Retry the same command, appending [foldername]/[vmname] (no .vmx file name required):
ovftool --eula#=[path to eula] --X:logToConsole --targetType=OVA --compress=9 vi://[username]:[ESX address]/[foldername if it exists]/[vmname] [target address]
I had this same exact issue. In my case I opened up the VMX file and dropped the IDE and sound controllers from the file and saved. I was then able to convert everything to an OVA using the tool with the standard syntax.
e.g. I dropped:
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
and:
sound.present = "TRUE"
sound.fileName = "-1"
sound.autodetect = "TRUE"
This allowed me to convert the file like normal.
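The same edit can be scripted (a sketch; the file name is an assumption, and sed keeps a .bak backup):
sed -i.bak '/^ide1:0\./d; /^sound\./d' abc.vmx   # drop the IDE CD-ROM and sound controller lines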
For me opening the .vmx and deleting the following line worked:
sata0:1.deviceType = "cdrom-image"
In my case, this works:
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
I changed TRUE to FALSE and it works fine; since the cdrom-image does not exist, this change permits the format conversion.
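That is, the resulting lines look like this (a sketch of the change described):
ide1:0.present = "FALSE"
ide1:0.deviceType = "cdrom-image"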
If your goal is to move a Windows-based VM to VirtualBox, you only need to:
uninstall vmware tools from the guest vm
shut down the machine
copy the hd to a new folder
create a new empty vm in virtualbox
mount the hd (the .vmdk file) in that vm
Easy and quick to do.
I am trying to run the standalone ncbi-blast-2.2.28+ on my machine (Mac) but get this error message when running blastp with SwissProt database:
BLAST Database error: Could not find volume or alias file (nr.00) referenced in alias file (/Users/me/bin/db/swissprot.00).
Here is what I did:
1) downloaded "ncbi-blast-2.2.28+-universal-macosx.tar.gz" from the NCBI server and decompressed it
2) moved the bin content of the folder to my $PATH directory "/Users/me/bin"
3) in "/Users/me/bin" I created a "db" folder, plus the ".ncbirc" file containing the following path:
[BLAST]
BLASTDB=/Users/me/bin/db
4) I downloaded the SwissProt database and got the following files in "/Users/me/bin/db/":
swissprot.00.msk
swissprot.01.msk
swissprot.02.msk
swissprot.03.msk
swissprot.04.msk
swissprot.05.msk
swissprot.06.msk
swissprot.07.msk
swissprot.08.msk
swissprot.09.msk
swissprot.10.msk
swissprot.00.pal
swissprot.01.pal
swissprot.02.pal
swissprot.03.pal
swissprot.04.pal
swissprot.05.pal
swissprot.06.pal
swissprot.07.pal
swissprot.08.pal
swissprot.09.pal
swissprot.10.pal
swissprot.pal
Then when I run blastp from any working directory (where my query file is), using this command:
blastp -query input.fasta -db swissprot
I get the following error message:
BLAST Database error: Could not find volume or alias file (nr.00) referenced in alias file (/Users/me/bin/db/swissprot.00).
As I read on other threads, I also tried specifying on the command line the whole path where the db is located, and removing the .pal extension from the file names, but it still doesn't work.
Can someone see what I did wrong?
You are storing your database files in the db folder, so you have to give this command instead of the one you have used:
blastp -query input.fasta -db db/swissprot
I believe you are looking for output in the console itself, as you haven't used the -out option.
Also, this will work only if the bin directory in which the db folder is present is declared as an environment variable.
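An equivalent sketch using the BLASTDB environment variable instead of a relative -db path (BLASTDB and -out are standard BLAST+ options):
export BLASTDB=/Users/me/bin/db
blastp -query input.fasta -db swissprot -out results.txt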
Have you checked the paths in the .pal file?
The SwissProt database that you have downloaded contains only links to entries in the nr database: "nr - Non-redundant GenBank CDS translations + PDB + SwissProt + PIR + PRF, excluding those in env_nr". So you should additionally download the nr database to run standalone BLAST on your machine with the SwissProt database. It weighs about 20 (!) GB, but without it your BLAST will not work. Here's a link: ftp://ftp.ncbi.nlm.nih.gov/blast/db/
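A sketch of fetching the nr volumes into the db folder (the volume count is an assumption; check the FTP listing first, or use the update_blastdb.pl script that ships with BLAST+):
cd /Users/me/bin/db
for i in 00 01 02; do                                # ...continue through the last nr volume
  curl -O ftp://ftp.ncbi.nlm.nih.gov/blast/db/nr.$i.tar.gz
  tar -xzf nr.$i.tar.gz
done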
Place all the files from the 00 to 10 volumes into db, and then check the .pal file: it should contain the 00 to 10 parts. For example, for the nr database it's like:
"nr.00" "nr.01" "nr.02" "nr.03" "nr.04" "nr.05" "nr.06" "nr.07" "nr.08" "nr.09" "nr.10"