ALE Fixer Configuration in Neovim

I want to configure my Ruby fixer to perform the following sequence:
Turn syntax off
Run the rubocop fixer
Turn syntax on
According to :help ale-fix-configuration:
Synchronous functions and asynchronous jobs will be run in a sequence
for fixing files, and can be combined. For example:
let g:ale_fixers = {
\ 'javascript': [
\ 'DoSomething',
\ 'eslint',
\ {buffer, lines -> filter(lines, 'v:val !=~ ''^\s*//''')},
\ ],
\}
I tried to follow the example:
function! SyntaxTurnOff()
exec "syntax off"
endfunction
function! SyntaxTurnOn()
exec "syntax on"
endfunction
" FIXERS
let g:ale_fixers = {
\ '*': ['remove_trailing_lines', 'trim_whitespace'],
\ 'ruby': [
\ 'SyntaxTurnOff',
\ 'rubocop',
\ 'SyntaxTurnOn',
\],
\ 'python': ['flake8'],
\ 'json': ['jq'],
\}
However, when I try to execute it by calling :ALEFix in the editor, I get the following error:
Error detected while processing function ale#fix#Fix[37]..<SNR>305_RunFixer:
line 17:
E118: Too many arguments for function: SyntaxTurnOff
What am I doing wrong?

I found another way to make this work.
Rather than trying to sequence function calls within the g:ale_fixers dictionary, I used autocommands instead.
First I defined the following functions:
function! SyntaxTurnOff()
"Turns syntax off only in current buffer
exec "syntax clear"
endfunction
function! SyntaxTurnOn()
exec "syntax on"
endfunction
Then, I used the built-in ALEFixPre and ALEFixPost autocommands:
augroup YourGroup
autocmd!
autocmd User ALEFixPre call SyntaxTurnOff()
autocmd User ALEFixPost call SyntaxTurnOn()
augroup END
My fixers are back to their previous, simple configuration.
" FIXERS
let g:ale_fixers = {
\ '*': ['remove_trailing_lines', 'trim_whitespace'],
\ 'ruby': ['rubocop'],
\ 'python': ['flake8'],
\ 'json': ['jq'],
\}
I'd be happy to hear of a better way, but this works for me, and I hope it helps someone else.
This came about because I had a 400-line file that was incredibly slow to fix, not because of rubocop, but because of syntax highlighting in Neovim. Before, running ALEFix would hold up that buffer for ages; now it's not instantaneous but it's pretty fast. To be fair, it's not due to ALEFix as such but rather to whatever Neovim has to do to redraw the buffer with syntax highlighting.
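For completeness, here is a sketch of what keeping the functions inside g:ale_fixers would probably take (untested): according to :help ale-fix-configuration, fixer callbacks are called with the buffer number as their first argument (and optionally the buffer's lines as a second), which is why the zero-argument functions above die with E118. Giving them the expected signature and returning 0, which tells ALE the fixer made no changes, should work:
function! SyntaxTurnOff(buffer) abort
" ALE passes the buffer number; return 0 so ALE leaves the lines alone
syntax off
return 0
endfunction
function! SyntaxTurnOn(buffer) abort
syntax on
return 0
endfunction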


Can't pass filepath as function argument in vim

I have this function in my vimrc:
function! MyFunc(fl)
:!cat fl
" :!cat a:fl
" :!cat &fl
endfunction
command! -nargs=1 RunFunc :call MyFunc(<f-args>)
The problem is when I run :RunFunc ~/scripts/0-test in vim command, I get the error:
cat: f: No such file or directory
shell returned 1
I have looked at various websites, but none of them worked for me.
First, you don't need that colon in a scripting context:
function! MyFunc(fl)
!cat fl
endfunction
command! -nargs=1 RunFunc :call MyFunc(<f-args>)
Second, you can't pass an expression like that. You need to build the whole command as a string and execute it with :help :execute:
function! MyFunc(fl)
execute "!cat " .. fl
endfunction
command! -nargs=1 RunFunc :call MyFunc(<f-args>)
Third, function arguments must be referenced with the a: scope prefix:
function! MyFunc(fl)
execute "!cat " .. a:fl
endfunction
command! -nargs=1 RunFunc :call MyFunc(<f-args>)
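With all three fixes applied, the original invocation should work:
:RunFunc ~/scripts/0-test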
As for websites… they are useless. Vim comes with exhaustive documentation that should be your first stop when you stumble on something, and it just so happens that the user manual (which is mandatory reading) has a whole chapter on writing Vim script: :help usr_41.txt.

Shell script function with global variable

Yesterday I got a very easy task, but unfortunately it looks like I can't solve it with nice code.
The task briefly: I have a lot of parameters that I want to ask for with whiptail's "interactive" mode in the installer script.
The detail of code:
#!/bin/bash
address="192.168.0.1" # default address, what the user can modify
addressT="" # temporary variable, that I want to check and modify, thats why I don't modify in the function the original variable
port="1234"
portT=""
... #there is a lot of other variable that I need for the installer
function parameter_set {
$1=$(whiptail --title "Installer" --inputbox "$2" 12 60 "$3" 3>&1 1>&2 2>&3) # thats the line 38
}
parameter_set "addressT" "Please enter IP address!" "$address"
parameter_set "portT" "Please enter PORT!" "$port"
But I got the following error:
"./install.sh: line: 38: address=127.0.0.1: command not found"
If I assign to another variable (not a parameter of the function), it works well:
function parameter_set {
foobar=$(whiptail --title "Installer" --inputbox "$2" 12 60 "$3" 3>&1 1>&2 2>&3)
echo $foobar
}
I tried using a global retval variable and assigning it to the original variable outside the function; that works, but I don't think it's the nicest solution for this task.
Could anybody tell me what I'm doing wrong? :)
Thanks in advance (and sorry for my bad english..),
Attila
The problem isn't whiptail's output; it's the assignment itself. $1=$(...) is not a valid assignment in bash: because the left-hand side starts with an expansion, bash doesn't treat the word as an assignment at all. It expands it to addressT=192.168.0.1 and tries to run that as a command, hence "command not found". Assign through the variable name indirectly instead, and keep the 3>&1 1>&2 2>&3 swap: whiptail writes the entered value to stderr, so the swap is what lets the command substitution capture it. It's also better to save the new value to a local variable first:
parameter_set() {
local NAME=$1
local NEWVALUE
NEWVALUE=$(whiptail --title "Installer" --inputbox "$2" 12 60 "$3" 3>&1 1>&2 2>&3)
printf -v "$NAME" '%s' "$NEWVALUE"
}
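Called exactly as in the original script, the entered value then lands in the named global:
parameter_set "addressT" "Please enter IP address!" "$address"
echo "$addressT" # whatever the user typed, e.g. 192.168.0.1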

ruby: stripping backslashes from string

I'm running into an issue trying to parse through a predefined Nessus XML report and dump the data into a MySQL database. Some of the data I'm dumping into the database contains the following characters, which obviously makes MySQL barf: ' " \
I am able to remove the single and double quotes, but is there a method to escape an escape? Keep in mind I don't have control over what is stored once it is iterated. Here's an example:
myvariable = "this is some bloated nessus output that has a bunch of crappy data and this domain\username"
myvariable.gsub!(/\\/, '')
This gsub won't remove the backslash because it thinks the \u is already escaped.
Here is the actual code that parses the Nessus XML file:
#!/usr/bin/ruby
#
# Database Schema:
# VALUES(Id (leave null), start time, hostname, host_ip, operating_system, scan_name, plugin_id, cve, cvss, risk, port, description, synopsis, solution, see_also, plugin_output, vuln_crit, vuln_high, vuln_med)
#
require 'mysql'
require 'nessus'
begin
con = Mysql.new 'yourdbhost', 'yourdbuser', 'yourpass', 'nessusdb'
scanTime = Time.now.to_i
Nessus::Parse.new("bloated.xml", :version => 2) do |scan|
scan.each_host do |host| # enumerate each host
start_time = host.start_time
next if host.event_count.zero? # skip host if there are no events to dump in the db
host.each_event do |event|
# '#{event.see_also.join('\s').gsub(/\"|\'|\\/, '')}'
# '#{event.solution.gsub!(/\"|\'|\\/, '')}'
# '#{event.synopsis.gsub!(/\"|\'|\\/, '')}'
con.query( \
"INSERT INTO nessus_scans VALUES \
(NULL, \
'#{scanTime}', \
'#{host.hostname}', \
'#{host.ip}', \
'#{host.operating_system}',\
'#{scan.title}', \
'#{event.plugin_id}', \
'#{event.cve}', \
'#{event.cvss_base_score}',\
'#{event.risk}', \
'#{event.port}', \
'#{event.description.gsub!(/\"|\'|\\/, '')}', \
NULL, \
NULL, \
NULL, \
NULL, \
NULL, \
NULL, \
NULL) \
")
end # end host.each_event iteration
end # end scan.each_host iteration
end # end Nessus::Parse block
rescue Mysql::Error => e
puts e.errno
puts e.error
ensure
con.close if con
end
You have a gigantic SQL injection hole here because you're not doing any escaping. Using the MySQL driver directly is an extremely bad idea; at the very least, use a database layer like Sequel or ActiveRecord. The singular reason MySQL is "barfing" is that you're not using it correctly: you must escape your values.
The easiest fix for this mess is to use the escape_string method, but you need to do this for every single value, something that quickly becomes tedious. A proper database layer allows you to use parameterized queries that handle escaping for you, which is why I strongly encourage that.
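To make the escape_string route concrete, here is a minimal sketch, reusing the con connection and the event object from your loop (column list shortened for illustration):
# escapes quotes and backslashes rather than stripping them
description = con.escape_string(event.description.to_s)
con.query("INSERT INTO nessus_scans (description) VALUES ('#{description}')")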
As for the original question of stripping backslashes, you can use a bracketed character class:
myvariable.gsub!(/[\\]+/, '')
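And if you can move to a database layer, a parameterized insert with Sequel would look roughly like this (a sketch: the connection URL and column names are placeholders, and host/event come from your existing loop):
require 'sequel'
DB = Sequel.connect('mysql2://yourdbuser:yourpass@yourdbhost/nessusdb')
# values passed as a hash are escaped/bound by Sequel, so quotes and
# backslashes in the data can't break the SQL
DB[:nessus_scans].insert(
hostname: host.hostname,
description: event.description
)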

How to parse json response in the shell script?

I am working with a bash shell script. I need to hit a URL from the script and then parse the JSON data that comes back.
This is my URL - http://localhost:8080/test_beat - and the response after hitting it will be one of these two:
{"error": "error_message"}
{"success": "success_message"}
Below is my shell script, which fetches the URL using wget.
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
#grep $DATA for error and success key
Now I am not sure how to parse the JSON response in $DATA and see whether the key is success or error. If the key is success, I want to print "success" and the $DATA value and exit the script with a zero status code; if the key is error, I want to print "error" and the $DATA value and exit with a non-zero status code.
How can I parse the JSON response and extract the key from it in a shell script?
I don't want to install any library to do this, since my JSON response is fixed and will always look like the above, so any simpler way is fine.
Update:
Below is my final shell script -
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/tester)
echo $DATA
#grep $DATA for error and success key
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
case "$KEY" in
success)
exit 0
;;
error)
exit 1
;;
esac
Does this look right?
If you are going to be using any more complicated json from the shell and you can install additional software, jq is going to be your friend.
So, for example, if you want to just extract the error message if present, then you can do this:
$ echo '{"error": "Some Error"}' | jq ".error"
"Some Error"
If you try this on the success case, it will print:
$ echo '{"success": "Yay"}' | jq ".error"
null
The main advantage of the tool is simply that it fully understands json. So, no need for concern over corner cases and whatnot.
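To tie this back to the exit-code requirement in the question, jq's -e flag sets the exit status from the last output (non-zero for false or null), so a sketch of the whole script could look like this:
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
# has("success") prints true or false; -e turns that into the exit code
if printf '%s' "$DATA" | jq -e 'has("success")' > /dev/null; then
echo "success: $DATA"
exit 0
else
echo "error: $DATA"
exit 1
fi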
#!/bin/bash
IFS= read -d '' DATA < temp.txt ## Imitates your DATA=$(wget ...). Just replace it.
while IFS=\" read -ra LINE; do
case "${LINE[1]}" in
error)
# ERROR_MSG=${LINE[3]}
printf -v ERROR_MSG '%b' "${LINE[3]}"
;;
success)
# SUCCESS_MSG=${LINE[3]}
printf -v SUCCESS_MSG '%b' "${LINE[3]}"
;;
esac
done <<< "$DATA"
echo "$ERROR_MSG|$SUCCESS_MSG" ## Shows: error_message|success_message
* %b expands backslash escape sequences in the corresponding argument.
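For instance:
printf -v MSG '%b' 'line1\nline2' # MSG now contains a real newline between the two words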
Update, as I didn't really get the question at first. Splitting $DATA on double quotes with IFS=\" turns {"success": "success_message"} into the fields {, success, : , and success_message, so the second and fourth fields land in KEY and MESSAGE. It should simply be:
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
[[ $KEY == success ]] ## Gives $? = 0 if true or else 1 if false.
And you can examine it further:
case "$KEY" in
success)
echo "Success message: $MESSAGE"
exit 0
;;
error)
echo "Error message: $MESSAGE"
exit 1
;;
esac
Of course similar obvious tests can be done with it:
if [[ $KEY == success ]]; then
echo "It was successful."
else
echo "It wasn't."
fi
From your last comment, it can be done simply as:
IFS=\" read __ KEY __ MESSAGE __ <<< "$DATA"
echo "$DATA" ## Your really need to show $DATA and not $MESSAGE right?
[[ $KEY == success ]]
exit ## Exits with code based from current $?. Not necessary if you're on the last line of the script.
You probably already have python installed, which has json parsing in the standard library. Python is not a great language for one-liners in shell scripts, but here is one way to use it:
#!/bin/bash
DATA=$(wget -O - -q -t 1 http://localhost:8080/test_beat)
if python -c '
import json, sys
exit(1 if "error" in json.loads(sys.stdin.read()) else 0)' <<<"$DATA"
then
echo "SUCCESS: $DATA"
else
echo "ERROR: $DATA"
exit 1
fi
Given:
* that you don't want to use JSON libraries,
* and that the response you're parsing is simple and the only thing you care about is the presence of the substring "success",
I suggest the following simplification:
#!/bin/bash
wget -O - -q -t 1 http://localhost:8080/tester | grep -F -q '"success"'
exit $?
-F tells grep to search for a fixed (literal) string.
-q tells grep to produce no output and instead only reflect via its exit code whether a match was found or not.
exit $? simply exits with grep's exit code ($? is a special variable that reflects the most recently executed command's exit code).
Note that if all you care about is whether wget's output contains "success", the above pipeline will do; no need to capture wget's output in an auxiliary variable.

Capture command output inside zsh function

I'm trying to write a zsh function to get the path to a python module.
This works:
pywhere() {
python -c "import $1; print $1.__file__"
}
However, what I'd really like is the dir path without the filename. This doesn't work:
pywhere() {
dirname $(python -c "import $1; print $1.__file__")
}
Note: it works in bash, but not in zsh!
EDIT: this is the error:
~ % pywhere() {
function → dirname $(python -c "import $1; print $1.__file__")
function → }
File "<string>", line 1
import pywhere() {
^
SyntaxError: invalid syntax
Your problem is due to a broken preexec: you aren't quoting the command line properly when you print it for inclusion in the window title.
In the .zshrc you posted, which is not the one you used (don't do that! Always copy-paste the exact file contents and commands that you used), I see:
a=${(V)1//\%/\%\%}
a=$(print -Pn "%40>...>$a" | tr -d "\n")
print -Pn "\ek$a:$3\e\\"
print -P causes prompt expansion. You include the command in the argument. You protect the % characters in the command by doubling them, but that's not enough. You evidently have the prompt_subst option turned on, so print -P causes the $(…) construct in the command line that defines the function to be executed:
python -c "import $1; print $1.__file__"
where $1 is the command line (the function definition: pywhere { … }).
Rather than attempt to parse the command line, print it out literally. This'll also correct other mistakes: beyond not taking prompt_subst into account, you doubled % signs but should have quadrupled them since you perform prompt expansion twice, and you expand \ sequences twice as well.
function title() {
a=${(q)1} # show control characters as escape sequences
if [[ $#a -gt 40 ]]; then a=$a[1,37]...; fi
case $TERM in
screen)
print -Pn "\ek"; print -r -- $a; print -Pn ":$3\e\\";;
xterm*|rxvt)
print -Pn "\e]2;$2 | "; print -r -- $a; print -Pn ":$3\a";;
esac
}
Why not just use this:
python -c "import os, $1; print os.path.dirname($1.__file__)"