I recently downloaded a large number (33,000) of pictures from a server that hosts a website I run. Many of the pictures have gibberish names, such as "Ч‘ЧђЧ ЧЁ-280x150.jpg".
These names were originally supposed to be in Hebrew, but when I downloaded them from the server they turned into gibberish. I could of course go over all the images and rename them by hand using some gibberish-to-Hebrew translator, but that isn't feasible with thousands of images.
So I'm looking for a way to rename all the badly named images back to their Hebrew names.
I don't have my gibberish-to-Hebrew translator with me, but this will give your images a number instead of a name...
#!/bin/bash
# Rename every .jpg in the current directory to a zero-padded
# sequential number: 000001.jpg, 000002.jpg, ...
i=1
for f in *.jpg
do
    newname=$(printf "%06d" $i)   # pad the counter to six digits
    echo mv "$f" "${newname}.jpg" # echo only; remove it to actually rename
    ((i++))
done
Sample output:
mv 1500x1000.jpg 000001.jpg
mv 3000x2000.jpg 000002.jpg
mv a.jpg 000003.jpg
mv green.jpg 000004.jpg
mv new.jpg 000005.jpg
mv red.jpg 000006.jpg
Remove the word echo if you like the results.
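If you would rather recover the original Hebrew names, the garbled names look like UTF-8 bytes that were decoded as cp1251 (hence the Cyrillic appearance; "Ч‘ЧђЧ ЧЁ" decodes back to the Hebrew "באנר"). If that guess is right, convmv can transcode the filenames in bulk. This is only a sketch under that assumption, with a made-up path, so try it on a copy first:
# Dry run first: by default convmv only prints what it would rename.
# Assumes the mojibake is UTF-8 Hebrew misread as cp1251 -- verify the preview!
convmv -f utf8 -t cp1251 -r /path/to/images
# If the preview shows the expected Hebrew names, apply for real:
convmv -f utf8 -t cp1251 --notest -r /path/to/images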
I have a large number of text files stored on a Red Hat server that contain explicit Windows paths. That path has now changed, and I would like to update the text files to reflect the new path. As they are Windows paths, they all contain single backslashes. I would like to maintain the single backslashes if possible.
What would be the best method to perform this string replacement? I have made backups of the folders so that I can test on a smaller scale before applying the change at the larger scale that will affect my group members.
Example:
Change $oldPath to $newPath in all *.py files recursively contained in current directory.
i.e. $oldPath\common\file_referenced should become $newPath\common\file_referenced
Robustly, using any awk in any shell on every Unix box, regardless of which characters your old or new directory paths contain, and regardless of whether the final directory in either path could be a substring of another existing directory name:
$ cat file
\old\fashioned\common\file_referenced
$ oldPath='\old\fashioned'
$ newPath='\new\fangled\etc'
$ awk '
BEGIN { old=ARGV[1]; new=ARGV[2]; ARGV[1]=ARGV[2]="" }
index($0"\\",old"\\")==1 { $0=new substr($0,length(old)+1) }
1' "$oldPath" "$newPath" file
\new\fangled\etc\common\file_referenced
To update all .py files in a directory you could use GNU awk for -i inplace, or you could do for i in *.py; do awk '...' old new "$i" > tmp && mv tmp "$i"; done, or you could use find and/or xargs, etc. - any of the common Unix ways to process multiple files with any command.
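For the recursive case the question asks about, here is one way to combine that loop with the same awk program (a sketch, assuming bash 4+ for the globstar option):
shopt -s globstar   # make ** match files at any depth
for f in ./**/*.py; do
    awk '
        BEGIN { old=ARGV[1]; new=ARGV[2]; ARGV[1]=ARGV[2]="" }
        index($0"\\",old"\\")==1 { $0=new substr($0,length(old)+1) }
        1' "$oldPath" "$newPath" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done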
I have about 1.5k CSV files and I need to stack them. (OS: Windows 10)
How can I stack them using csvkit? (Or maybe you can recommend something other than csvkit?)
I'm trying the following: I created the folder structure and ran
cd files
for /r %i in (*) do csvstack -e utf-8 ../res.csv %i > ../res.csv
But it doesn't really work. Help, please.
You can use
csvstack *.csv >./output.csv
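Carrying over the -e utf-8 flag from your attempt, and writing the result one level up so the glob does not pick it up, that might look like:
cd files
csvstack -e utf-8 *.csv > ../res.csv
Note that this relies on your shell expanding *.csv; plain cmd.exe may not do that for you, so on Windows you may need to run it from a Unix-like shell such as Git Bash.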
cd .. goes one folder up.
Is there a (one-line) command to go n folders up?
You sure can define a function to do that:
$ go_up() { for i in $(seq $1); do cd ..; done }
$ go_up 3 # go 3 directories up
I am not aware of any command that does that, but it is easy to create one yourself. For example, just add
cdn() {
    for ((i=0;i<${1-0};i++))
    do
        cd ..
    done
}
in your ~/.bashrc file; then, after you start a new shell, you can just run
cdn N
and you will move up N directories.
All right, here is another really funny answer that really is a one-liner, to go up 42 parent directories:
cd $(yes ../|head -42|tr -d \\n)
Same as gniourf_gniourf's other answer, it's cd - friendly (and it's just a couple characters longer than the shortest answer).
Replace 42 with your favorite number.
Now that you understand the amazing power of the wonderful command yes, you can join the dark side and use the evil command eval, and while we're at it, we can use the terrible backticks:
eval `yes 'cd ..;'|head -42`
This is so far the shortest one-liner, but it's really bad: it uses eval and backticks, and it's not cd - friendly. But hey, it works really well and it's funny!
You can use a single-line for loop:
for i in {1..3}; do cd ../; done
Replace the 3 with your n. For example:
m@mariachi:~/test/5/4/3/2/1$ pwd
/home/m/test/5/4/3/2/1
m@mariachi:~/test/5/4/3/2/1$ for i in {1..3}; do cd ../; done
m@mariachi:~/test/5/4$ pwd
/home/m/test/5/4
...however, I don't think it will be much faster than typing cd and .. then hitting Tab for each level you want to go up! :)
How often do you go up more than five levels? If the answer is "not too often", I suggest you place these cd - friendly aliases in your profile:
alias up2='cd ../..'
alias up3='cd ../../..'
alias up4='cd ../../../..'
alias up5='cd ../../../../..'
Advantages
No bashisms, no zshisms, no kshisms.
Works with any shell supporting aliases.
As readable and understandable as it gets.
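If you want more levels without writing each alias out by hand, a small sketch that generates up2 through up9 (it should work in any POSIX shell's startup file, since it only uses a plain for loop and alias):
d=..
for i in 2 3 4 5 6 7 8 9; do
    d=$d/..                # up2 -> ../.., up3 -> ../../.., ...
    alias "up$i=cd $d"
done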
A funny way:
cdupn() {
    local a
    [[ $1 =~ ^[[:digit:]]+$ ]] && printf -v a "%$1s" && cd "${a// /../}"
}
How does it work?
We first check that the argument is indeed a number (a string of digits).
We populate the variable a with $1 spaces.
We perform the cd where each space in a has been replaced with ../.
Use as:
cdupn 42
to go up to the forty-second parent directory.
The pro of this method is that you'll still be able to cd - to come back to the previous directory, unlike the methods that use a loop.
Absolutely worth putting in your .bashrc. Or not.
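For illustration, a hypothetical session showing the cd - friendliness (paths made up):
$ pwd
/usr/share/doc
$ cdupn 2
$ pwd
/usr
$ cd -      # jumps straight back, since cdupn issued a single cd
/usr/share/doc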
I have a lot of .html files saved, and I have already written a bunch of awk code for Windows that does the further text processing I need and works perfectly with one file, but I couldn't find a way to read all of the files one after another and put the results into results.txt.
awk -f C:/PLT2/parse.txt input_files > C:/PLT2/results.txt
This is a question for your OS, not for awk. The UNIX answer would be:
awk -f C:/PLT2/parse.txt input_file1 input_file2 input_file3 ... > C:/PLT2/results.txt
but you're not using UNIX, so....
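One Windows workaround is to let cmd.exe feed the files to awk one at a time (a sketch: the paths assume the .html files live in C:\PLT2, and note that running awk once per file resets NR and any END-block state between files; inside a batch file, double the percent signs as %%f):
rem Start with an empty results file so the appends below are clean
type nul > C:\PLT2\results.txt
rem Run the awk program on each .html file and append its output
for %f in (C:\PLT2\*.html) do awk -f C:/PLT2/parse.txt "%f" >> C:\PLT2\results.txt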
So the utility diff works just like I want for 2 files, but I have a project that requires comparisons of more than 2 files at a time, maybe up to 10. This also requires having all those files displayed side by side. My research has not really turned up anything; vimdiff seems to be the best so far, with the ability to compare 4 files at a time.
My question: is there any utility to compare more than 2 files at a time, or a way to hack diff/vimdiff so it can do multiple comparisons? The files I will be comparing are relatively short, so it should not be too slow.
Displaying 10 files side-by-side and highlighting differences can be easily done with Diffuse. Simply specify all files on the command line like this:
diffuse 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt
Vim can already do this:
vim -d file1 file2 file3
But you're normally limited to 4 files. You can change that by modifying a single line in Vim's source, however. The constant DB_COUNT defines the maximum number of diffed files, and it's defined towards the top of diff.c in versions 6.x and earlier, or about two thirds of the way down structs.h in versions 7.0 and up.
diff has the built-in options --from-file and --to-file, which compare one operand to all others.
--from-file=FILE1
Compare FILE1 to all operands. FILE1 can be a directory.
--to-file=FILE2
Compare all operands to FILE2. FILE2 can be a directory.
Note: --to-file is not required; the remaining files can simply be listed as operands. For example:
# this will compare foo.html with bar.html, then foo.html with baz.html
$ diff --from-file foo.html bar.html baz.html
# this will compare src/base-main.js with all .js files in the git repo
# that have 'main' in their filename or path
$ git ls-files :/*main*.js | xargs diff -u --from-file src/base-main.js
Checkout "Beyond Compare": http://www.scootersoftware.com/
It lets you compare entire directories of files, and it looks like it runs on Linux too.
If you're running multiple diffs based off one file, you could try writing a script with a for loop that runs through each directory and runs the diff. Although it wouldn't be side by side, you could at least compare the files quickly. Hope that helps.
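That idea might look something like this (a sketch with made-up paths, comparing one base file against its counterpart in each directory):
for d in dir1 dir2 dir3; do
    echo "== comparing against $d =="
    diff base/file.txt "$d/file.txt"
done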
Not answering the main question, but here's something similar to what Benjamin Neil suggested, except diffing all the files:
Store the filenames in an array, then loop over the combinations of size two and diff (or do whatever you want).
files=(/path/of/files/some-prefix.*)  # Array of files to compare (glob, no ls parsing)
max=${#files[@]}                      # Take the length of that array
for ((idxA=0; idxA<max; idxA++)); do           # iterate idxA from 0 to max-1
    for ((idxB=idxA+1; idxB<max; idxB++)); do  # iterate idxB from idxA+1 to max-1
        echo "A: ${files[$idxA]}; B: ${files[$idxB]}"  # Do whatever you're here for.
    done
done
Derived from @charles-duffy's answer: https://stackoverflow.com/a/46719215/1160428
There is a simple and good way to do this: grep.
Depending on the size of the text, you can copy and paste it, or you can redirect the file's contents to the grep command. You can use grep -vir /path for an inverted search (lines that do not match), or grep -ir /path for a regular one. This is my way for certification exams.