wget SSRS report - reporting-services

I'm trying to pull a report down using the following:
https://user:password@domain.com/ReportServer?%2fFolder+1%2fReportName&rs:Format=CSV&rs:Command=Render
And it just pulls an html page and not the csv file. Any ideas?

What does the HTML file say? Something like "access denied"? And while you're at it, try
wget --user bob --password 123456 'https://domain.com/ReportServer?%2fFolder+1%2fReportName&rs:Format=CSV&rs:Command=Render'
Make sure you are using quotes. Otherwise, the shell will cut off the command before the first ampersand.
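If that works, you can also have wget write the CSV to a named file with -O (just a sketch; the credentials and URL are the placeholders from above):
wget --user bob --password 123456 -O report.csv 'https://domain.com/ReportServer?%2fFolder+1%2fReportName&rs:Format=CSV&rs:Command=Render'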

Related

Use cURL to pull data from Web into excel

Guys, I'm currently working with cURL for the first time.
What I am trying to do is pull data from a website using cURL
and put it into Excel using the following command. I have to use an API key
to get the data.
curl -H "X-API-Key: API_KEY_NUMBER" http://example.com/api/exports/model/62f0d0dc24757f6e5bb0b723 -o "text.xlsx"
This works fine so far; the problem is that if I want to open it in Excel, it tells me that the file cannot be opened because the file format or the file extension is invalid.
If I change the file extension to
curl -H "X-API-Key: API_KEY_NUMBER" http://example.com/api/exports/model/62f0d0dc24757f6e5bb0b723 -o "text.txt"
it opens as a text file, but with all the data that I need. Now I am looking for a way to solve this.
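One way to see what the API actually returned (a rough check, assuming a Unix-like shell and the file name from the command above):
file text.xlsx
head -c 200 text.xlsx
A real .xlsx file is a zip container, so file should report something like "Zip archive" or "Microsoft Excel 2007+". If the output looks like plain CSV or JSON instead, the export is not an Excel workbook, and saving it with a matching extension (e.g. .csv) and importing that into Excel is probably the way to go.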

Call external program from mysql

How can I call an external program from mysql?
I am a complete beginner at this. On Linux Mint 20, I created a database of all my video files; the paths of the videos are all listed in a table.
I can access the DB using Bash with:
mysql -u root -proot -e "use collection; select path from videos where path Like '%foo%' or path Like '%bar%'"
to search for what I want, but now I want to pipe the chosen vid(s) to MPV/VLC, whatever.
Apart from the fact I am doing it as root, am I going about this the wrong way?
I just want to perform quick searches in a terminal, then fire up the vid(s).
Thanks a lot, folks.
If I'm understanding correctly, you want to query your DB for a specific type of file or path, and then use the result of your query to open up the files?
You don't open the program from MySQL, but you could open it from bash.
Figure out what the bash command is to open that program, then loop over the output of your query in bash and open the results one by one.
Alternatively you can output the results to a temporary file and read from it with bash:
mysql -u USER -pPASSWORD -e "YOUR QUERY" > /tmp/output.txt
If you can get the right output in your output.txt file, I would look into reading from that file in bash with a loop. Something like:
while IFS= read -r line
do
mpv "$line"
done < /tmp/output.txt
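As a rough sketch, the two steps can also be combined into one pipeline (assuming the paths contain no newlines; -N suppresses the column-name header so it is not passed to the player):
mysql -u root -proot -N -e "use collection; select path from videos where path like '%foo%'" |
while IFS= read -r line
do
    mpv "$line"
done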

How to translate my cURL command into Chrome command?

I want to fire a POST request from the command line to post my image to an image-searching site. At first, I tried cURL and got this command, which works:
curl -i -X POST -F file=@search.png http://saucenao.com/search.php
It posts a file as form data to the searching site and returns an HTML page full of JavaScript, which is hard to read in a terminal. It's also hard to preview online images in a terminal.
Then I remembered that I can open Chrome with arguments on the command line, which I think may solve my problem. After some digging, I found Chrome switches, but it seems they are just startup flags (I'm not sure if this is right, but I didn't find a way to fire a POST request like cURL does).
So, can I start Chrome from the command line with a POST request, just like my cURL command above?
There are a couple of things you could do.
You could write a script in JavaScript that will send the POST request and display the results inside the <body> element or the like;
You could keep the cURL command and use the -o (or --output) option to save the resulting HTML in a file (but drop the -i switch, to avoid having the headers in the file), then open the file in Chrome or whichever browser you prefer. You can combine the two commands as a one-liner in any operating system. If you use Ubuntu, for example:
$ curl -o search.html -X POST -F file=@search.png http://saucenao.com/search.php && google-chrome search.html && rm search.html
According to this answer, you could use bcat to avoid using a temporary file. Install it with apt-get install ruby-bcat and then just run
$ curl -X POST -F file=@search.png http://saucenao.com/search.php | bcat
I think the easier option is #2, but whichever you prefer.
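A variation on option 2 that avoids leaving a file with a fixed name around (just a sketch; mktemp and xdg-open are assumed to be available, as on most Linux systems):
tmp=$(mktemp --suffix=.html)
curl -o "$tmp" -X POST -F file=@search.png http://saucenao.com/search.php && xdg-open "$tmp"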

Wget copy text from html

I'm really new to programming and Linux/Unix, so I was wondering what command I can use to copy just the text of a webpage and save it in a file in the directory. I want to copy the text of something like this:
http://cseweb.ucsd.edu/classes/wi12/cse130-a/pa5/words
Would wget do it? Also, what specific commands get it saved into the directory?
Another option using wget like you wondered about would be:
wget -O file.txt "http://cseweb.ucsd.edu/classes/wi12/cse130-a/pa5/words"
The -O option lets you specify which file name you want to save it to.
One option would be:
curl -s http://cseweb.ucsd.edu/classes/wi12/cse130-a/pa5/words > file
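If you want the file saved in a specific directory, you can either give -O a full path or use -P to set the directory prefix (the directory below is just an example):
wget -P ~/downloads "http://cseweb.ucsd.edu/classes/wi12/cse130-a/pa5/words"
wget -O ~/downloads/words.txt "http://cseweb.ucsd.edu/classes/wi12/cse130-a/pa5/words"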

download html page for offline use

I want to make an HTML page available for offline viewing by downloading the HTML and all image/CSS resources from it, but not the other pages it links to.
I was looking at httrack and wget but could not find the right set of arguments (I need the command line).
Any ideas?
If you want to download using the newest version of wget, get it with the Cygwin installer
and use this command:
wget -m -w 2 -p -E -k -P {target-dir} http://{website}
to mirror {website} into {target-dir} (note that wget 1.11.4 does not download the images).
Leave out -w 2 to speed things up.
For one page, the following wget command line parameters should be enough. Please keep in mind that it might not download everything, e.g. background images referenced from CSS files.
wget -p <webpage>
Also try wget --help for a list of all command line parameters.
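If the page pulls images or CSS from other hosts, a commonly suggested combination (not from the answers above, just a sketch) is:
wget -E -H -k -K -p http://{website}
Here -p grabs the page requisites, -H allows spanning to other hosts, -k rewrites the links for local viewing, -K keeps backups of the rewritten files, and -E saves HTML pages with an .html extension.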