How to show all pages for a search result from Leiningen?

I can search for packages in leiningen in the console by executing
$ lein search jdbc
However, this only returns the first page out of many. I would like to get all the pages and just grep them for the stuff I want. Is there a way to do that?

You could write a script of some sort that iterates over
lein search jdbc $i
, where i starts at 1 and is incremented, and the loop finishes when the output is just "Searching over Artifact ID...".
Your script would then concatenate the results into a single string you can grep.
Ordinary answer, I know...
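A rough sketch of such a script (the page argument and the "Searching over Artifact ID..." sentinel are assumptions taken from the approach described above, not verified against a particular lein version):
#!/bin/bash
query="jdbc"
page=1
all=""
while true; do
    out="$(lein search "$query" "$page")"
    # Stop once a page contains nothing beyond the header line.
    if ! printf '%s\n' "$out" | grep -qv 'Searching over Artifact ID'; then
        break
    fi
    all+="$out"$'\n'
    page=$((page + 1))
done
printf '%s' "$all" | grep -i 'postgresql'   # example: grep the combined output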

Can you set an artificial starting point in your code in Octave?

I'm relatively new to using Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I make edits to my code and test it, I find it annoying that I need to wait 30 seconds to see whether my updates work. Is there a way to run the code once at first to load the data I need, and then set up an artificial starting point so that when I rerun the code (or input something into the command window) it only runs a desired section (the section after the time-consuming part), leaving the already-loaded data intact?
You can store the variable you want to keep in a global variable,
and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. At least it does not close the session: there is still some job left for quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to re-compute all your values each time. But you can also start Octave and then run your script at the interactive terminal. At the end of the script, the workspace will contain all the variables your script used. You can type individual statements at the interactive terminal prompt, which use and modify these variables, just like running a script one line at a time.
You can also set breakpoints. You can set a breakpoint at any point in your script, then run your script. The script will run until the breakpoint, then the interactive terminal will become active and you can work with the variables as they are at that point.
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
% Section 1
% ... do some computations here
save my_data
else
load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 will be run, and the results saved to file. Now change the 1 to 0, and then run the script again. This time, Section 1 will be skipped, and the previously saved variables will be loaded.

Output last command

I am using a function that I found in YADR which should insert the output of the last command.
# Use Ctrl-x,Ctrl-l to get the output of the last command
zmodload -i zsh/parameter
insert-last-command-output() {
LBUFFER+="$(eval $history[$((HISTCMD-1))])"
}
zle -N insert-last-command-output
bindkey "^X^L" insert-last-command-output
For some reason, it does not seem to work when pressing ctrl-x ctrl-l, but running the
echo $(eval $history[$((HISTCMD-1))])
command in the terminal does produce the output of the last command.
Running bindkey -M viins shows "^X^L" insert-last-command-output
as one of the entries. Therefore, the function is registered.
I don't really understand how the function works. I think that the variable LBUFFER holds the output of all last commands but when I echo $LBUFFER, it returns the function code.
Can anyone help me get this working?
I finally found a solution.
I had been trying to use the shortcut inside tmux, which did not work; outside tmux, everything worked. It turns out that tmux will not allow a shortcut with two keys. I changed the shortcut to just alt-L and everything works.
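A binding along those lines (a sketch only; the exact escape sequence your terminal sends for alt-L may differ) would be:
# Bind Alt-l (ESC followed by l) instead of the Ctrl-x Ctrl-l chord.
bindkey "^[l" insert-last-command-output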

Parsing JSON output for Hive

I need to automatically move new cases (TheHive-Project) to LimeSurvey every 5 minutes. I have figured out the basis of the API script to add responses to LimeSurvey. However, I can't figure out how to add only new cases, and how to parse the Hive case data for the information I want to add.
So far I've been using curl to get a list of cases from hive. The following is the command and the output.
curl -su user:pass http://myhiveIPaddress:9000/api/case
[{"createdBy":"charlie","owner":"charlie","createdAt":1498749369897,"startDate":1498749300000,"title":"test","caseId":1,"user":"charlie","status":"Open","description":"testtest","tlp":2,"tags":[],"flag":false,"severity":1,"metrics":{"Time for Alert to Handler Pickup":2,"Time from open to close":4,"Time from compromise to discovery":6},"updatedBy":"charlie","updatedAt":1498751817577,"id":"AVz0bH7yqaVU6WeZlx3w","_type":"case"},{"createdBy":"charlie","owner":"charlie","title":"testtest","caseId":3,"description":"ddd","user":"charlie","status":"Open","createdAt":1499446483328,"startDate":1499446440000,"severity":2,"tlp":2,"tags":[],"flag":false,"id":"AV0d-Z0DqHSVxnJ8z_HI","_type":"case"},{"createdBy":"charlie","owner":"charlie","createdAt":1499268177619,"title":"test test","user":"charlie","status":"Open","caseId":2,"startDate":1499268120000,"tlp":2,"tags":[],"flag":false,"description":"s","severity":1,"metrics":{"Time from open to close":2,"Time for Alert to Handler Pickup":3,"Time from compromise to discovery":null},"updatedBy":"charlie","updatedAt":1499268203235,"id":"AV0TWOIinKQtYP_yBYgG","_type":"case"}]
Each field is separated by the delimiter },{.
Regarding parsing out specific information from each case, I previously tried just using the cut command. This mostly worked until I reached "metrics"; it doesn't always work for metrics because they are not always listed in the same order.
I have asked my boss for help, and he told me this command might get me going in the right direction to adding only new hive cases to the survey, but I'm still very lost and want to avoid asking too much again.
curl -su user:pass http://myhiveIPaddress:9000/api/case | sed 's/},{/\n/g' | sed 's/\[{//g' | sed 's/}]//g' | awk -F '"caseId":' {'print $2'} | cut -f 1 -d , | sort -n | while read line; do echo '"caseId":'$line; done
Basically, I'm in way over my head and feel like I have no idea what I'm doing. If I need to clarify anything, or if it would help for me to post what I have so far in my API script, please let me know.
Update
Here is the potential logic for the script I'd like to write.
get list of hive cases (curl ...)
read each field, delimited by },{
while read each field, check /tmp/addedHiveCases to see if caseId of field already exists
--> if it does not exist in file, add case to limesurvey and add caseId to /tmp/addedHiveCases
--> if it does exist, skip to next field
Why do you think the fields are separated by a "},{" delimiter?
The response of the /api/case API is valid JSON that lists the cases.
Can you use a Python script to play with the API? If yes, I can help you write the script you need.
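If Python is not an option, here is a minimal shell sketch of the logic from the update above. It assumes jq is installed; the field names come from the sample output, the /tmp/addedHiveCases file is the one described in the update, and the LimeSurvey call is left as a placeholder:
curl -su user:pass http://myhiveIPaddress:9000/api/case |
jq -r '.[] | "\(.caseId)\t\(.title)\t\(.severity)"' |
while IFS=$'\t' read -r caseId title severity; do
    if ! grep -qx "$caseId" /tmp/addedHiveCases 2>/dev/null; then
        # New case: call the LimeSurvey API for it here, then remember the id.
        echo "$caseId" >> /tmp/addedHiveCases
    fi
done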

If in Loop in bash

This should be an incredibly easy question but I am not very familiar with bash and I am taking way longer than I should to figure it out.
declare -a ids=( 1 2 3 )
for i in "${ids[@]}";
do
re= $(mysql -h .... "SELECT col_A FROM DBA WHERE id=$i")
if [ $re -eq 0 ]; then
echo sucess
fi
done
This is an example of what I am trying to do: I have an array of ids and I want to send a query to my database to get a flag from the row with a certain id, and then do something based on that. But I keep getting unexpected token errors and I am not entirely sure why.
Edit: While copying the code and deleting some private information I somehow deleted the then; it was present in the code I was testing.
Based on what you described and the partial script, I am not certain I can completely create what you are trying to do, but the token error messages you are experiencing usually have to do with the way bash handles whitespace as a delimiter. A few comments based on what you posted:
You need to remove the space around the equal sign when assigning a variable, so the space after the equal sign in re= needs to be removed.
Because bash is sensitive to whitespace, you need to quote variable assignments that might contain a space. To be safe, put quotes around the command substitution $( ).
You were missing the then in the if statement.
It is important that variables inside test brackets, that is, single [ ]s, are quoted. Using an unquoted variable with -eq, or even just an unquoted variable alone within test brackets, normally works; however, this is an unsafe practice and can give unpredictable results (see the short demonstration below).
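To illustrate that pitfall (a hypothetical snippet, not part of your script):
re=""
if [ $re -eq 0 ]; then echo "flag is 0"; fi    # expands to [ -eq 0 ]: "unary operator expected"
if [ "$re" -eq 0 ]; then echo "flag is 0"; fi  # fails predictably: "integer expression expected"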
So, taking into account the items noted, the updated script would look something like:
declare -a ids=( 1 2 3 )
for i in "${ids[@]}";
do
re="$(mysql -h .... "SELECT col_A FROM DBA WHERE id=$i")"
if [ "$re" -eq "0" ]; then
echo "success"
fi
done
Can you try working the edits mentioned into your script and see if you are able to get it working? Remember, it will be helpful to use a site like ShellCheck to learn more about the potential pitfalls and quirks of bash syntax. This will help ensure you are working toward a solution to your specific need rather than getting trapped by some tricky syntax.
After you have worked through those edits, can you report back your experience?
EDIT
Based on your comments, there is a good chance you are not running your script with bash despite including #!/bin/bash at the top of your script. When you run the script as sh scriptname.sh, you are forcing the script to be run by sh, not bash. Try running your script like this: /bin/bash scriptname.sh, then report back on your experience.
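For example (a hypothetical sketch; scriptname.sh stands in for your actual file), you can either invoke bash explicitly or make the script executable so its #!/bin/bash shebang is honoured:
/bin/bash scriptname.sh   # run explicitly with bash
chmod +x scriptname.sh    # or mark the script executable once
./scriptname.sh           # the kernel then uses the shebang line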
For more information on the differences between various shells, see Unix/Linux: Difference between sh, csh, ksh and bash Shell.
The problem with your if statement is that you do not have the then keyword. A simple fix is:
declare -a ids=( 1 2 3 )
for i in "${ids[@]}";
do
    re=$(mysql -h .... "SELECT col_A FROM DBA WHERE id=$i")
    if [ $re -eq 0 ]; then
        echo success
    fi
done
Also, here is a great reference on if statements in bash.

Extracting CREATE TABLE definitions from MySQL dump?

I have a MySQL dump file that is over 1 terabyte in size. I need to extract the CREATE TABLE statements from it so I can provide the table definitions.
I purchased Hex Editor Neo, but I'm kind of disappointed I did. I created a regex CREATE\s+TABLE(.|\s)*?(?=ENGINE=InnoDB) to extract the CREATE TABLE clauses, and that seems to work well when testing in Notepad++.
However, the ETA of extracting all instances is over 3 hours, and I cannot even be sure that it is doing it correctly. I don't even know if those lines can be exported when done.
Is there a quick way I can do this on my Ubuntu box using grep or something?
UPDATE
Ran this overnight and the output file came back blank. I created a smaller subset of data and the procedure is still not working. The regex works in regex testers, but grep does not like it and yields empty output. Here is the command I'm running. I'd provide the sample, but I don't want to breach confidentiality for my client. It's just a standard MySQL dump.
grep -oP "CREATE\s+TABLE(.|\s)+?(?=ENGINE=InnoDB)" test.txt > plates_schema.txt
UPDATE
It seems not to match across the newlines right after the CREATE\s+TABLE part.
You can use Perl for this task... this should be really fast.
Perl's .. (range) operator is stateful - it remembers state between evaluations.
What this means is: if your table definition starts with CREATE TABLE and ends with something like ENGINE=InnoDB DEFAULT CHARSET=utf8;, then the command below will do what you want.
perl -ne 'print if /CREATE TABLE/../ENGINE=InnoDB/' INPUT_FILE.sql > OUTPUT_FILE.sql
EDIT:
Since you are working with a really large file and would probably like to know the progress, pv can give you this also:
pv INPUT_FILE.sql | perl -ne 'print if /CREATE TABLE/../ENGINE=InnoDB/' > OUTPUT_FILE.sql
This will show you a progress bar, the speed, and an ETA.
You can use the following:
grep -ioP "^CREATE\s+TABLE[\s\S]*?(?=ENGINE=InnoDB)" file.txt > output.txt
If you can run mysqldump again, simply add --no-data.
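For reference, a possible invocation (host, user, and database name are placeholders):
# Dump only the table definitions, no row data.
mysqldump --no-data -h host -u user -p database_name > schema_only.sql
This re-dumps just the schema from the server, which is usually far faster than scanning a 1 TB dump file.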
Got it! grep does not support matching across multiple lines. I found this question helpful and I ended up using pcregrep instead.
pcregrep -M "CREATE\s+TABLE(.|\n|\s)+?(?=ENGINE=InnoDB)" test.txt > plates.schema.txt