I want to fire a POST request from the command line to post my image to an image search site. At first I tried cURL and got this command, which works:
curl -i -X POST -F file=@search.png http://saucenao.com/search.php
It posts the file as form data to the search site and returns an HTML result page full of JavaScript, which makes it hard to read in a terminal. It's also hard to preview online images in a terminal.
Then I remembered that I can open Chrome with arguments from the command line, which I thought might solve my problem. After some digging, I found the list of Chrome switches, but it seems they are only startup flags (I'm not sure if this is right, but I didn't find a way to fire a POST request the way cURL does).
So, can I start Chrome from the command line with a POST request, just like my cURL command above?
There are a couple of things you could do.
You could write a script in JavaScript that will send the POST request and display the results inside the <body> element or the like;
You could keep the cURL command and use the -o (or --output) option to save the resulting HTML to a file (but drop the -i switch, to avoid having the headers in the file), then open the file in Chrome or whichever browser you prefer. You could combine the two commands into a one-liner in any operating system. On Ubuntu, for example:
$ curl -o search.html -X POST -F file=@search.png http://saucenao.com/search.php && google-chrome search.html && rm search.html
According to this answer you could use bcat to avoid the temporary file. Install it with apt-get install ruby-bcat and then just run
$ curl -X POST -F file=@search.png http://saucenao.com/search.php | bcat
I think the easier option is #2, but go with whichever you prefer.
Related
I am failing to download the website below using wget in bash:
wget --wait 1 -x -H -mk https://bittrex.com/api/v1.1/public/getorderbook?market=usdt-btc&type=sell
I have found some similar issues in other questions; however, their solution was to use -mk, which makes no difference here. The prompt just freezes after the command and nothing happens. If I try to open the same URL in a browser, it opens normally. I would be grateful for any help here.
As already pointed out in a comment, you need to quote your URL.
The & in the URL puts wget --wait 1 -x -H -mk https://bittrex.com/api/v1.1/public/getorderbook?market=usdt-btc into the background and causes everything after the & to be interpreted as a separate command, which can be a real risk depending on the URL!
If your URL contains a $, you should use single quotes (') to pass the string literally without variable expansion; otherwise double quotes (") are fine.
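For example, the command from the question works once the URL is quoted (single quotes are shown here; since this URL contains no $, double quotes would work just as well):
wget --wait 1 -x -H -mk 'https://bittrex.com/api/v1.1/public/getorderbook?market=usdt-btc&type=sell'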
Is there a way to change the repository URL of a Hudson job using the Hudson CLI API?
There is no way to change the repository URL using the Hudson CLI. However, there is a workaround that can be automated with little effort.
Workaround:
We can use cURL to download the config.xml of the job using the following command (note that in order to run cURL commands you have to set up cURL):
curl -X GET http://your-hudson-server/job/TheNameOfTheJob/config.xml -o localCopy.xml
The configuration file will contain something similar to this (depending on the Version Control used):
<scm-property>
<originalValue class="hudson.scm.SubversionSCM">
<locations>
<hudson.scm.SubversionSCM_-ModuleLocation>
<remote>https://your-repository</remote>
The value of the <remote> tag is the repository URL (also check the credentials for the new repository).
There are several ways to submit the modified version of config.xml back to the server with cURL. One way is:
curl -X POST http://your-hudson-server/job/TheNameOfTheJob/config.xml --data-binary "@newconfig.xml"
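If you want to script the whole round trip, the two cURL calls can be glued together with a small shell snippet along these lines. This is only a sketch: the server address, job name, and new repository URL are placeholders, and the sed expression assumes the <remote> value sits on a single line and contains no | characters.
#!/bin/bash
SERVER=http://your-hudson-server
JOB=TheNameOfTheJob
NEW_REPO=https://your-new-repository

# fetch the current job configuration
curl -X GET "$SERVER/job/$JOB/config.xml" -o localCopy.xml

# swap the old repository URL for the new one (assumes <remote> is on one line)
sed "s|<remote>.*</remote>|<remote>${NEW_REPO}</remote>|" localCopy.xml > newconfig.xml

# push the modified configuration back to the server
curl -X POST "$SERVER/job/$JOB/config.xml" --data-binary "@newconfig.xml"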
I am new to curl and am trying to write a bash script that I can run to download a file. I start off by authenticating, then make a POST to request a download. I am given a Foo_ID, which I parse using bash and store in a variable. I then try to GET the Foo data via a download URL. The issue I am having is that whenever I pass in the parameter I parsed from the POST response, I get nothing. Here is an example of what I am doing.
#!/bin/bash
curl -b cookies -c cookies -X POST -d @login_info "https://foo.bar.com/auth"
curl -b cookies -c cookies -X POST -d @Foo_info "https://foo.bar.com/foos" > ./tmp/stuff.t
myFooID=$(grep -Po '"foo_id:.*?",' ./tmp/stuff.t | cut -d '"' -f 4)
curl -b cookies -c cookies "http://foo.bar.com/foo-download?id=${myFooID}" > ./myFoos/Foo1.data
I have echoed myFooID to make sure it is correct, and it is. I have also echoed "https://foo.bar.com/foo-download?id=${myFooID}" and it properly shows the URL I need. Could anyone help me with this? As I said, I am new to using curl and a little rusty with bash.
So I have solved the issue. The problem was that after doing my POST for the Foo, I didn't give enough time for the foo to be created before trying to download it. I added a sleep command between the last two curl commands and now it works perfectly. I would like to thank Dennis Williamson for helping me clean up my code, which led me to understanding my issue. I have created a shrine for him on my desk.
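For anyone hitting the same thing, the fix amounts to something like the following sketch. It reuses the placeholder names from the question; the sleep duration is a guess and should be tuned to however long the server needs to prepare the download.
curl -b cookies -c cookies -X POST -d @Foo_info "https://foo.bar.com/foos" > ./tmp/stuff.t
myFooID=$(grep -Po '"foo_id:.*?",' ./tmp/stuff.t | cut -d '"' -f 4)
# give the server time to create the foo before asking for it
sleep 5
curl -b cookies -c cookies "http://foo.bar.com/foo-download?id=${myFooID}" > ./myFoos/Foo1.data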
I want to make an HTML page available for offline viewing by downloading the HTML and all image/CSS resources from it, but not the other pages it links to.
I was looking at httrack and wget but could not find the right set of arguments (I need the command line).
Any ideas?
If you want to download using the newest version of wget, get it via the Cygwin installer and use this command:
wget -m -w 2 -p -E -k -P {target-dir} http://{website}
to mirror {website} to {target-dir} (in wget 1.11.4 this does not download the images).
Leave out -w 2 to speed things up.
For one page, the following wget command line parameters should be enough. Keep in mind that it might not download everything, e.g. background images referenced from CSS files.
wget -p <webpage>
Also try wget --help for a list of all command line parameters.
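If plain wget -p leaves things out, a slightly fuller single-page command that combines the switches mentioned above might look like this (a sketch; example.com is a placeholder, and -H is only needed when images or CSS live on another host):
wget -p -E -k -H 'http://example.com/page.html'
# -p  page requisites (images, CSS, scripts)
# -E  save files with an .html extension where appropriate
# -k  convert links so the page works offline
# -H  span hosts, for requisites served from other domains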
I am quite new to wget and I have done my research on Google, but I haven't found a clue.
I need to save a single HTML file of a webpage:
wget yahoo.com -O test.html
and it works, but when I try to be more specific:
wget http://search.yahoo.com/404handler?src=search&p=food+delicious -O test.html
here comes the problem: the shell treats &p=food+delicious as a separate command, and it says: 'p' is not recognized as an internal or external command
How can I solve this problem? I really appreciate your suggestions.
The & has a special meaning in the shell. Escape it with \ or put the URL in quotes to avoid this problem.
wget http://search.yahoo.com/404handler?src=search\&p=food+delicious -O test.html
or
wget "http://search.yahoo.com/404handler?src=search&p=food+delicious" -O test.html
In many Unix shells, putting an & after a command causes it to be executed in the background.
Wrap your URL in single quotes to avoid this issue.
i.e.
wget 'http://search.yahoo.com/404handler?src=search&p=food+delicious' -O test.html
If you are using a Jupyter notebook, check that you have installed the wget package first:
pip install wget
before fetching from the URL.