Using this ffprobe blackdetect cmd
ffprobe "./somefile.m2v" "blackdetect=d=0.1:pix_th=0.00"
I'm getting this error:
Argument 'blackdetect=d=0.1:pix_th=0.00' provided as input filename, but './somefile.m2v' was already specified.
I am able to return a simple ffprobe information on this same file. Why am I getting this error?
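For reference, blackdetect is a libavfilter filter, so ffprobe treats that second positional argument as another input filename, which is exactly what the error says. A hedged sketch of the two usual ways to run the filter (untested against this particular file):

# Let ffmpeg run the filter and grep the blackdetect lines from its log:
ffmpeg -i ./somefile.m2v -vf "blackdetect=d=0.1:pix_th=0.00" -an -f null - 2>&1 | grep blackdetect

# Or feed the file to ffprobe through a lavfi movie source instead of a plain input:
ffprobe -f lavfi -i "movie=./somefile.m2v,blackdetect=d=0.1:pix_th=0.00" -show_frames -v quiet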
I'm trying to get a simple csv file from a url in Julia using Downloads and CSV without success.
This is what I've done so far:
using Downloads, CSV, DataFrames
url = "https://r-data.pmagunia.com/system/files/datasets/dataset-85141.csv"
f = Downloads.download(url)
df = CSV.read(f, DataFrame)
But I get the following error: ArgumentError: Symbol name may not contain \0
I've tried using normalizenames, but also without success:
f = Downloads.download(url)
df = CSV.File(f, normalizenames=true)
But then I get Invalid UTF-8 string as an error message.
When I simply download the file and get it from my PC with CSV.read I get no errors.
The server is serving that file with Content-Encoding: gzip, i.e. the data that is transferred is compressed and the client is expected to decompress it. You can try this out yourself on the command line; curl does not decompress by default:
$ curl https://r-data.pmagunia.com/system/files/datasets/dataset-85141.csv
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
However, if you pass the --compressed flag:
$ curl --compressed https://r-data.pmagunia.com/system/files/datasets/dataset-85141.csv
"time","Nile"
1871,1120
1872,1160
1873,963
[...]
Downloads.jl uses libcurl and I can't find much mention of handling of compressed content in the Downloads.jl repository.
To fix this for now, you can upgrade to v0.9.4 of CSV.jl, which handles gzipped CSV files transparently.
If updating is not an option you can use CodecZlib.jl manually:
using Downloads, CSV, DataFrames, CodecZlib
url = "https://r-data.pmagunia.com/system/files/datasets/dataset-85141.csv"
f = Downloads.download(url)
df = open(fh -> CSV.read(GzipDecompressorStream(fh), DataFrame), f)
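If you would rather decompress in memory than wrap the file handle, something along these lines should also work (a sketch relying on CodecZlib's transcode; the variable names are mine):

using Downloads, CSV, DataFrames, CodecZlib

url = "https://r-data.pmagunia.com/system/files/datasets/dataset-85141.csv"
raw = read(Downloads.download(url))                      # gzip-compressed bytes
df = CSV.read(transcode(GzipDecompressor, raw), DataFrame)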
I converted my .csv file to .dat format and tried to load it into Octave. It throws an error:
unable to find file filename
I also tried to load the file in .csv format using the syntax
x = csvread(filename)
and it throws the error:
'filename' undefined near line 1 column 13.
I also tried opening the file in the editor and loading it from there, and now it shows me
warning: load: 'filepath' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'.
How can I load my data?
>> load Salary_Data.dat
error: load: unable to find file Salary_Data.dat
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> x = csvread(Salary_Data)
error: 'Salary_Data' undefined near line 1 column 13
>> x = csvread(Salary_Data.csv)
error: 'Salary_Data' undefined near line 1 column 13
>> load Salary_Data.dat
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.dat' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'
>> load Salary_Data.csv
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.csv' found by searching load path
error: load: unable to determine file format of 'Salary_Data.csv'
Salary_Data.csv
YearsExperience,Salary
1.1,39343.00
1.3,46205.00
1.5,37731.00
2.0,43525.00
2.2,39891.00
2.9,56642.00
3.0,60150.00
3.2,54445.00
3.2,64445.00
3.7,57189.00
3.9,63218.00
4.0,55794.00
4.0,56957.00
4.1,57081.00
4.5,61111.00
4.9,67938.00
5.1,66029.00
5.3,83088.00
5.9,81363.00
6.0,93940.00
6.8,91738.00
7.1,98273.00
7.9,101302.00
8.2,113812.00
8.7,109431.00
9.0,105582.00
9.5,116969.00
9.6,112635.00
10.3,122391.00
10.5,121872.00
Ok, you've stumbled through a whole pile of issues here.
It would help if you didn't give us error messages without the commands that produced them.
The first message means you were telling Octave to open something called filename and it couldn't find anything called filename. Did you define the variable filename? Your second command and the error message suggest you didn't.
Do you know what Octave's working directory is? Is it the same as where the file is located? From the response to your load commands, I'd guess not. The file is located at C:/Users/vaith/Desktop. Octave's working directory is probably somewhere else.
(Try the pwd command and see what it tells you. Use the file browser or the cd command to navigate to the same location as the file. help pwd and help cd commands would also provide useful information.)
The load command, used in command syntax (load file.txt), takes its argument literally whether or not it is written as a string. The function syntax (load('file.txt') or csvread('file.txt')) requires a string input, hence the quotes around file.txt. So all of your csvread commands thought you were giving them variable names, not filenames.
Last, the fact that load couldn't read your data isn't overly surprising. Octave is trying to guess what kind of file it is and how to load it. I assume you tried help load to see what the different command options are? You can give it different options to help Octave figure it out. If it actually is a csv file though, and is all numbers not text, then csvread might still be your best option if you use it correctly. help csvread would be good information for you.
It looks from your data like you have a header line that is probably confusing the load command. For data as simply formatted as yours, the csvread command can bring in the data. It will just replace your header text with zeros.
So, first, navigate to the location of the file:
>> cd C:/Users/vaith/Desktop
then open the file:
>> mydata = csvread('Salary_Data.csv')
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
If you plan to reuse the filename, you can assign it to a variable, then open the file:
>> myfile = 'Salary_Data.csv'
myfile = Salary_Data.csv
>> mydata = csvread(myfile)
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
Notice how the filename is stored and used as a string with quotation marks, but the variable name is not. Also, csvread converted the non-numeric header data to zeros. The help for csvread and dlmread shows you how to change it to something other than zero, or to skip a certain number of rows. If you want to preserve the text, you'll have to use some other input function.
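If you would rather skip the header line than have it turned into zeros, csvread accepts row/column offsets, and textscan can keep the header text. A rough sketch (the format string and delimiter are assumptions based on your data):

>> mydata = csvread('Salary_Data.csv', 1, 0)    % start at row 1, column 0, i.e. skip the header

>> fid = fopen('Salary_Data.csv', 'r');
>> header = strsplit(fgetl(fid), ',');          % {'YearsExperience', 'Salary'}
>> body = textscan(fid, '%f%f', 'Delimiter', ',');
>> fclose(fid);
>> mydata = [body{1}, body{2}];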
I am trying to transfer data from a JSON file produced by the Google Maps API onto my PostgreSQL database. This is done through cURL and I made sure that the permissions have been correctly set.
The url:
https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=AIza-[key-redacted]-3z6ho-o
The query:
copy bookings.import(info) from program 'C:/temp/mycurl/curl "https://maps.googleapis.com/maps/api/distancematrixjson?units=imperial&origins=London&destinations=Paris&key=AIza-[key-redacted]-3z6ho-o" --insecure'
However, when I try to do this on my table with column 'info' of type 'json', I get the following error:
ERROR: invalid input syntax for type json
SQL state: 22P02
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {
COPY import, line 1, column info: "{"
I am trying to avoid bringing in PHP or any other tool for now, but if that turns out to be the only option I would certainly consider it.
What exactly do you guys think I am doing wrong? Is it the syntax, the format or am I missing something?
Thanks!
COPY assumes that each newline indicates a new record. Unfortunately, the Google Maps Distance Matrix API pretty-prints your response, which means it comes through as 23 rows, none of which is valid JSON on its own.
You can get around this by piping the curl response through something like jq.
copy imports(info) from program 'curl "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=<my_key>" --insecure | /usr/local/bin/jq "." -c'
jq has lots of useful features if you want to massage the response a bit more before stashing it in the database.
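Since your original COPY invoked curl from a Windows path, the path to jq would need adjusting too; a hypothetical Windows variant (the jq.exe location here is just an example):

copy bookings.import(info) from program 'C:/temp/mycurl/curl "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=<my_key>" --insecure | C:/temp/jq.exe "." -c'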
Am I correct that one cannot use Cypher to load a CSV file whose header also carries data types?
(By "with data type", I mean a header with something like this:
For entities:
orderId:ID(Order) customerId:IGNORE
For relationships:
:START_ID(Order) :END_ID(Product)
)
According to these two websites: https://neo4j.com/developer/guide-import-csv/, http://jexp.de/blog/2015/04/how-to-neo4j-data-import-minimal-example/
it seems that I should be able to import data together with headers this way, in either PowerShell or the command prompt (I am using a Windows computer):
path\to\neo4j-community-3.1.1\bin\neo4j-import --into graph.db \
--nodes:Person C:\SavedNewest\people_header.csv, C:\SavedNewest\people.csv \
--relationships:KNOWS C:\SavedNewest\friendships_header.csv,C:\SavedNewest\friendships.csv
(The csv are reconstructed according to this website: http://jexp.de/blog/2015/04/how-to-neo4j-data-import-minimal-example/)
Error from PowerShell:
At line:2 char:3
+ --nodes:Person C:\SavedNewest\people_header.csv,https://gist.githubus ...
+ ~
Missing expression after unary operator '--'.
At line:2 char:3
+ --nodes:Person C:\SavedNewest\people_header.csv,https://gist.githubus ...
+ ~~~~~~~~~~~~
Unexpected token 'nodes:Person' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingExpressionAfterOperator
Error from Command prompt:
WARNING: This command does not appear to be running with administrative rights.
Some commands may fail e.g. Start/Stop
WARNING: neo4j-import is deprecated and support for it will be removed in a future
version of Neo4j; please use neo4j-admin import instead.
Input error: Expected '--relationships' to have at least 1 valid item, but had 0 []
Caused by:Expected '--relationships' to have at least 1 valid item, but had 0 []
java.lang.IllegalArgumentException: Expected '--relationships' to have at least 1 valid item, but had 0 []
at org.neo4j.kernel.impl.util.Validators.lambda$atLeast$6(Validators.java:125)
at org.neo4j.helpers.Args.validated(Args.java:640)
at org.neo4j.helpers.Args.interpretOptionsWithMetadata(Args.java:608)
at org.neo4j.tooling.ImportTool.extractInputFiles(ImportTool.java:508)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:389)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:334)
What is the cause of the error, and how should I load a CSV file with headers and data types correctly?
Edit:
New Input for cmd:
C:\Users\tsutomu\Desktop\MSS\Bachelorarbeit\neo4j-community-3.1.1\bin\neo4j-import --into graph.db --nodes:Person "file:c:/SavedNewest/people_header.csv,file:c:/SavedNewest/people.csv" --relationships:KNOWS "file:c:/SavedNewest/friendships_header.csv,file:c:/SavedNewest/friendships.csv"
The Error:
Input error: Directory of file:c:\SavedNewest\people_header.csv doesn't exist
Caused by:Directory of file:c:\SavedNewest\people_header.csv doesn't exist
java.lang.IllegalArgumentException: Directory of file:c:\SavedNewest\people_header.csv doesn't exist
at org.neo4j.kernel.impl.util.Validators.matchingFiles(Validators.java:48)
at org.neo4j.kernel.impl.util.Converters.lambda$regexFiles$7(Converters.java:76)
at org.neo4j.kernel.impl.util.Converters.lambda$toFiles$8(Converters.java:95)
at org.neo4j.helpers.Args.interpretOptionsWithMetadata(Args.java:608)
at org.neo4j.tooling.ImportTool.extractInputFiles(ImportTool.java:508)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:388)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:334)
Same error for powershell.
The path of people_header.csv: C:\SavedNewest\people_header.csv
Is there anything I should add to the environment PATH?
You have a couple of issues in your command line:
You cannot have embedded spaces in your command line arguments. For example, C:\SavedNewest\people_header.csv, C:\SavedNewest\people.csv should be a single argument, so you need to either remove the space after the comma or double-quote the entire argument.
The file path URLs must be formatted appropriately. To quote from the developer guide:
Make sure to use the right URLs, especially file URLs. On OS X and Unix use file:///path/to/data.csv; on Windows, please use file:c:/path/to/data.csv
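Putting both points together, a corrected invocation might look something like this, all on one line with no spaces after the commas (a sketch only: it uses plain filesystem paths, on the assumption that neo4j-import, unlike LOAD CSV, accepts ordinary paths; switch to the file:c:/ form above if your tool version expects URLs):

path\to\neo4j-community-3.1.1\bin\neo4j-import --into graph.db --nodes:Person "C:/SavedNewest/people_header.csv,C:/SavedNewest/people.csv" --relationships:KNOWS "C:/SavedNewest/friendships_header.csv,C:/SavedNewest/friendships.csv"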
I'm currently developing a bash script that makes a query to a server with curl, and returns a JSON object. I'm trying to parse that object with jsawk. For example, here is some data I'm trying to parse:
{
"account":{
"quota":20,
"email":"test#example.com",
"uuid":"12ae7a0cbd",
"email_verified":true
}
}
In the terminal I'm running cat test.json | jsawk -q 'account.quota'.
Assume test.json is the above object. Every time I run that command, I always get the following error: jsawk: js error: ReferenceError: account is not defined, even though account is very clearly defined.
Try
<test.json jsawk 'return this.account.quota'
I removed your useless use of cat.
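As an aside, jq can do the same lookup if you have it installed (it is not part of your setup, so treat this purely as an alternative):

$ jq '.account.quota' test.json
20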