tblastn search using GI instead of fasta - blast

I have to perform a tblastn search for some GIs. I've been testing my script with a FASTA file and it worked perfectly, but now that I've tried it with a GI number, it crashes.
tblastn -query 351738029 -db /home/databases/nt/nt -out $output -show_gis
I've tried with just the number and with gi|351738029, but I fail to see where the problem is.
EDIT: I've also tried this:
tblastn -query gi|351738029|gb|AEQ61064.1| -db /home/databases/nt/nt -out $output -show_gis
But the "|" are interpreted as pipes. Also tried writing the whole GI between "", but useless.

I found that you cannot simply pass the GI directly as an argument; it has to be inside a file. The bright side is that you can use a file with several GIs, and it will query all of them.
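A minimal sketch of that workflow, reusing the paths from the question (the list and output file names are illustrative):
# one GI per line in a plain text file; the file, not the bare number, is what -query takes
echo "351738029" > gi_list.txt
tblastn -query gi_list.txt -db /home/databases/nt/nt -out results.txt -show_gis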

I am using identical syntax in jq to change JSON values, yet one case works while the other turns bash interactive. How can I fix this?

I am trying to update a simple JSON file (it consists of one object with several key/value pairs) and I am using the same command, yet I get different results (sometimes the whole JSON even gets wiped by the 2nd command). The command I am trying is:
cat ~/Desktop/config.json | jq '.Option = "klay 10"' | tee ~/Desktop/config.json
This command perfectly replaces the value of the minerOptions key with "klay 10", my intended output.
Then I try to run the same process on the newly updated file (only the value of that one key has changed) and I get only an interactive terminal with no result; ps unfortunately isn't helpful in showing what's going on. This is what I do after getting that first command to perfectly change the value of the key:
cat ~/Desktop/config.json | jq ‘.othOptions = "-epool etc-eu1.nanopool.org:14324 -ewal 0xc63c1e59c54ca935bd491ac68fe9a7f1139bdbc0 -mode 1"' | tee ~/Desktop/config.json
which I would have expected to replace the value of the othOptions key with the assigned result, just as the previous command did. I tried sending stdout directly to the file, but got no result there either. I even tried piping one more time, creating a temp file and then moving it over the original. All of these, as opposed to the first, identical command, just return > and absolutely zero output; when I quit the process, the value is the same as before, not the new one.
What am I missing here that causes the same command to behave differently with merely different inputs? (The key in the second command comes right after the first and has an identical structure; it isn't creating an object or anything, just a key/value pair like the first.) I thought it could be tee, but any other approach, such as sending stdout straight to the file, produces the same constant > prompt waiting for more input.
I genuinely looked everywhere I could online for why this could be happening before resorting to SE; it's giving me such a headache for what I thought should be simple.
As @GordonDavisson pointed out, using tee to overwrite the input file is a well-known recipe for disaster (see e.g. the jq FAQ). If you absolutely, positively want to overwrite the file unconditionally, then you might want to consider using sponge, as in
jq ... config.json | sponge config.json
or more safely:
cp -p config.json config.json.bak && jq ... config.json | sponge config.json
For further details about this and other options, search for ‘sponge’ in the FAQ.
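If sponge (from moreutils) is not available, another common pattern is to write to a temporary file and replace the original only if jq succeeded; a sketch reusing the key from the question (the value and temp-file name are placeholders):
jq '.othOptions = "new value"' ~/Desktop/config.json > ~/Desktop/config.json.tmp && mv ~/Desktop/config.json.tmp ~/Desktop/config.json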

Function to open a file and navigate to a specified line number

I have the output of recursive grep (actually ag) in a buffer, which is of the form filename:linenumber: ... [match] ..., and I want to be able to go to the occurrence (file and line number) currently under the cursor. This told me that I could execute normal-mode movements, so after extracting the file:line portion, I wrote this function:
function OpenFileNewTab(name)
    let l:pair = split(a:name, ":")
    execute "tabnew" get(l:pair, 0)
    execute "normal!" get(l:pair, 1) . "G"
endfunction
It is supposed to open the specified file in a tab and then do <lineno>G, like I am able to do manually, to go to the specified line number. However, the cursor just stays on line 1. What am I doing wrong?
This question, by title alone, would be an exact duplicate, but it talks about locating symbols in other files, while I already have the locations at hand.
Edit: My mappings for grep / ag are as follows:
nnoremap <Leader>ag :execute "new \| read !ag --literal -w" "<C-r><C-w>" g:repo \| :set filetype=c<CR>
nnoremap <Leader>gf ^v2t:"zy :execute OpenFileNewTab("<C-r>z")<CR>
To get my grep / ag results, I put the cursor on the word I want to search for and press <leader>ag; then, in the new buffer, I put the cursor on a line and press <leader>gf - it selects from the start of the line up to the second colon and calls OpenFileNewTab.
Edit 2: I'm on Cygwin, if it is of any importance - I doubt it.
Why don't you set &grepprg to call ag?
" according to man ag
set grepprg=ag\ --vimgrep\ $*
set grepformat=%f:%l:%c:%m
" And then (not tested)
nnoremap <Leader>ag :grep -w <c-r><c-w><cr>
As others have said in the comments, you are just trying to emulate what the quickfix window already provides. And we are lucky: vim can call grep, and it has a variation point that lets us specify which grep program we wish to use: 'grepprg'.
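An untested usage sketch once 'grepprg' and 'grepformat' are set as above (the search word is made up):
:grep -w some_word
:copen
Pressing <Enter> on a quickfix entry jumps to that file and line, and :cnext / :cprev step through the matches.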
Use the file-line plugin. Pressing Enter on a line in the quickfix list will normally open that file; file-line makes any filename of the form file:line:column (and several other formats) open the file and position the cursor at that line and column.
I only found this (old) thread after I posted the exact same question on vi.stackexchange: https://vi.stackexchange.com/q/39557/44764. To help anyone who comes looking, I post the best answer to my question below as an alternative to the answers already given.
The gF command, like gf, opens the file under the cursor, but additionally it positions the cursor on the line number that follows the colon (and <C-w>gF does the same in a new tab page). (I note the OP defines <leader>gf, so maybe vim/neovim didn't auto-define gf or gF at the time this thread was originally created.)
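For instance (the path and line number are made up), with the cursor anywhere on a result line such as
src/parser.c:128: some match
gF opens src/parser.c with the cursor on line 128, and <C-w>gF does the same in a new tab page.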

Extracting CREATE TABLE definitions from MySQL dump?

I have a MySQL dump file that is over 1 terabyte in size. I need to extract the CREATE TABLE statements from it so I can provide the table definitions.
I purchased Hex Editor Neo, but I'm kind of disappointed that I did. I created a regex, CREATE\s+TABLE(.|\s)*?(?=ENGINE=InnoDB), to extract the CREATE TABLE clauses, and that seems to work well when tested in Notepad++.
However, the ETA for extracting all instances is over 3 hours, and I cannot even be sure that it is doing it correctly. I don't even know if those lines can be exported when it's done.
Is there a quick way I can do this on my Ubuntu box using grep or something?
UPDATE
I ran this overnight and the output file came back blank. I created a smaller subset of the data and the procedure still isn't working. It works in regex testers, but grep doesn't like it and yields empty output. Here is the command I'm running. I'd provide a sample, but I don't want to breach my client's confidentiality; it's just a standard MySQL dump.
grep -oP "CREATE\s+TABLE(.|\s)+?(?=ENGINE=InnoDB)" test.txt > plates_schema.txt
UPDATE
It seems not to match the newlines right after the CREATE\s+TABLE part.
You can use Perl for this task; it should be really fast.
Perl's .. (range) operator is stateful - it remembers its state between evaluations.
What this means is: if your table definition starts with CREATE TABLE and ends with something like ENGINE=InnoDB DEFAULT CHARSET=utf8;, then the line below will do what you want.
perl -ne 'print if /CREATE TABLE/../ENGINE=InnoDB/' INPUT_FILE.sql > OUTPUT_FILE.sql
EDIT:
Since you are working with a really large file and would probably like to know the progress, pv can give you this also:
pv INPUT_FILE.sql | perl -ne 'print if /CREATE TABLE/../ENGINE=InnoDB/' > OUTPUT_FILE.sql
This will show you a progress bar, the speed, and the ETA.
You can use the following:
grep -ioP "^CREATE\s+TABLE[\s\S]*?(?=ENGINE=InnoDB)" file.txt > output.txt
If you can run mysqldump again, simply add --no-data.
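For example (the user and database names are placeholders):
mysqldump --no-data -u some_user -p some_database > schema_only.sql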
Got it! grep does not support matching across multiple lines. I found this question helpful and ended up using pcregrep instead.
pcregrep -M "CREATE\s+TABLE(.|\n|\s)+?(?=ENGINE=InnoDB)" test.txt > plates.schema.txt

tcl open pipe seems to mishandle spaces in parameters

I have this open:
set r [catch {open "|[concat $config(cmd,sh) [list $cmd 2>@1]]" r} fid]
where $config(cmd,sh) is cmd /c and I am trying to pass a file name (and possibly a command such as echo) in $cmd. If there is no space in the file name, i.e.:
cmd is echo /filename
all is well. With a space, i.e.:
cmd is echo "/file name"
what appears to be passed is:
\"file name\".
When I try this on Linux, I get "file name" (no backslashes). I have tried replacing the spaces in the file name with "\ ", but then the target gets two file names, i.e. the space is used to break up the file name.
I am beginning to think I have found a bug in the Windows port of Tcl...
Ugh, that looks convoluted! To pass this sort of thing into the pipe creation code, you need to use exactly the right recipe:
set r [catch {open |[list {*}$config(cmd,sh) $cmd 2>@1] r} fid]
That is, always use the form with |[list ...] when building pipes, as the documentation says that is what the pipe opener looks for. (This is the only command like that in Tcl.)
And of course, using the (8.5+) {*} syntax is much simpler in this case too, as it is more obviously doing the right thing.
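An untested usage sketch under the question's setup (the echoed file name is purely illustrative):
set config(cmd,sh) [list cmd /c]
set cmd {echo "/file name"}
set r [catch {open |[list {*}$config(cmd,sh) $cmd 2>@1] r} fid]
if {$r} {
    puts stderr "failed to open pipe: $fid"   ;# on failure, $fid holds the error message
} else {
    puts [read $fid]                          ;# on success, $fid is the readable channel
    close $fid
}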

Replacing output text of a command with a string in a Shell Script

Hello and thank you for any help you can provide
I have my Apache2 web server set up so that when I go to a specific link, it will run and display the output of a shell script stored on my server. I need to output the results of an SVN command (svn log -q, with -q for quiet), whose output is the list of log entries separated by rows of exactly 72 dashes.
I need to be able to take these dashes and turn them into an HTML line break.
Basically, I need the shell script to take the output of the 'svn log -q' command, search and replace every chunk of 72 dashes with an HTML line break, and then echo the output.
Is this at all possible?
I'm somewhat a noob at shell scripting, so please excuse any mess-ups.
Thank you so much for your help.
svn log -q | sed -e 's,-\{72\},<br/>,'
If you want to do it within the script itself, this might help:
${string//substring/replacement}
Replace all matches of $substring with $replacement.
stringZ=abcABC123ABCabc
echo ${stringZ/abc/xyz} # xyzABC123ABCabc
# Replaces first match of 'abc' with 'xyz'.
echo ${stringZ//abc/xyz} # xyzABC123ABCxyz
# Replaces all matches of 'abc' with 'xyz'.
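Putting it together, a minimal sketch of what such a script could look like (the CGI-style header lines are an assumption about how Apache serves it, not something stated in the question):
#!/bin/sh
# Print an HTTP header, then the svn log with each 72-dash separator replaced by <br/>
echo "Content-Type: text/html"
echo ""
svn log -q | sed -e 's,-\{72\},<br/>,'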