How to select directories in bash from a SQL table - mysql

I have a directory containing more than 1100 subdirectories, and I want to move about 400 of them whose names I have stored in a SQL table. Is there a way to achieve this? I have searched on Google but can't find anything. One possibility might be to export the table records to a text file, but I still don't know how to connect the text file to the directories. Thanks.

#!/bin/bash
DIRLIST='file'                     # text file with one directory name per line
SOURCE='/my/source/directory'
TARGET='/my/target/directory'
# read each name from the list and move the matching directory
while read -r dir; do
    echo mv "$SOURCE/$dir" "$TARGET"
done < "$DIRLIST"
where file contains
directory1
directory2
directory3
(customize the example to your specific taste, and remove the echo statement in front of the mv after testing)
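To produce file straight from the table, something along these lines should work (a sketch only; the database, table, and column names are placeholders for your own):
mysql -N -B -u root -p -e 'SELECT dir_name FROM dir_table' my_database > file
The -N (--skip-column-names) and -B (--batch) options give plain output with one directory name per line, which is exactly what the read loop above expects.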

Related

How to split a string in an Excel file with a bash script?

Good Afternoon
I am trying to develop a bash script which fetches data from a database and then fills a CSV file with said data.
So far I have managed to do just that, but the way the data is presented is not good: all the data is written in one single cell, like so:
and I would like the data to be presented like this:
Here is my bash script code so far:
#!/bin/bash
currentDate=$(date)
mysql -u root -p -D cms -e 'SELECT * from bill' > test_"${currentDate}".csv
Can any of you tell me what bash commands I can use to achieve the desired result?
Running cat on the file gives the following result:
Thank you in advance.
Using sed, you can change the delimiter of the output displayed in your image (please post text rather than images in the future):
$ sed 's/ \+/,/g' test.csv
If happy with the output, you can then save the file in place.
$ sed -i 's/ \+/,/g' test.csv
You should now have the output in different cells when the file is opened in Excel.
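Note that mysql's batch output is actually tab-separated (as the next answer points out), so if your column values can themselves contain spaces, translating tabs rather than runs of spaces may be safer; a minimal variant of the same idea using tr:
$ tr '\t' ',' < test.csv > test_commas.csv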
Data appears to be tab-delimited (cat -T test.csv should show a ^I between each column); I believe excel's default behavior when opening a .csv file is to parse the file based on a comma delimiter.
To override this default behavior and have excel parse the file based on a different delimiter (tab in this case):
open a clean/new worksheet
(menu) DATA -> From Text (file browser should pop up)
select test.csv and hit Import (new pop up asks for details on how to parse)
make sure Delimited radio button is chosen (the default), hit Next >
make sure Tab checkbox is selected (the default), hit Next >
verify the format in the Data preview window (at the bottom of the pop-up) and if ok then hit 'Finish'
Alternatively, save the file as test.txt and upon opening the file with excel you should be prompted with the same pop ups asking for parsing details.
I'm not a big excel user so I'm not sure if there's a way to get excel to automatically parse your files based on tabs (a google/web search will likely provide more help at this point).

How to restore multiple SQL files to a different database name for each file in MySQL?

I have hundreds of SQL files, and I want to restore each of them into a database with a different name for each file.
I have looked around for a solution, but what I found is something like concatenating all the files into one SQL file using cat and then restoring from that concatenated file.
But what I want is to restore each file to a different database, so I think concatenation is not suitable for my case.
Here's one solution: alternate USE commands with your sql files, so you change the default database before the respective database's content. Gather the whole collection together and then pipe that to the input of the mysql client.
Example using bash syntax:
(
echo "USE database1;"
cat file1.sql
echo "USE database2;"
cat file2.sql
...
) | mysql
Another solution is to run the mysql client once for each file, and specify the database name as the argument:
mysql database1 < file1.sql
mysql database2 < file2.sql
...
Re your comment:
You can write a loop in bash too.
for file in *.sql
do
    db=...
    mysql "$db" < "$file"
done
The tricky part above is the "..." — deciding which db goes with each input SQL file. You haven't described any way to match them, so I don't know what you'd have to do to figure that out. But if you can make that inference somehow from the filename, then you can do this without having to type every file.
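For example, if each dump file happens to be named after its target database (an assumption; adjust to your own naming scheme), you could derive the name from the filename:
for file in *.sql
do
    db=$(basename "$file" .sql)   # e.g. database1.sql -> database1
    mysql "$db" < "$file"
done
You may need to create each database first, e.g. with mysqladmin create, if it does not already exist.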

CSV parser - evaluate header for each file

I have multiple CSV files in a directory. They may have different column combinations, but I would like to COPY them all with a single command, as there are a lot of them and they all go into the same table. But FDelimitedParser only evaluates the header row of the first file, then rejects all rows that do not fit, i.e. all rows from most of the other files. I've been using FDelimitedParser, but anything else is fine.
1 - Is this expected behavior, and if so, why?
2 - I want it to evaluate the headers for each file, is there a way ?
Thanks
(Vertica 7.2)
Looks like you need flex tables for that; see http://vertica-howto.info/2014/07/how-to-load-csv-files-into-flex-tables/
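The general shape of that approach (a rough sketch only; table and path names are placeholders, run through vsql or the client of your choice) is to create a flex table and COPY each file into it with the flex CSV parser, which by default treats the first row of each file as field names:
vsql -c "CREATE FLEX TABLE csv_staging();"
for f in /path/to/csvs/*.csv; do
    vsql -c "COPY csv_staging FROM LOCAL '$f' PARSER fcsvparser();"
done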
Here's a small workaround that I use when I need to load a bunch of files in at once. This assumes all your files have the same column order.
Download and run Cygwin
Navigate to folder with csv files
cd your_folder_name_with_csv_files
Combine all csv files into a new file
cat *.csv >> new_file_name.csv
Run a copy statement in Vertica from the new file. If file headers are an issue, you can follow the instructions at this link and run them through Cygwin to remove the first line from every file (a sketch of that step follows below).
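A minimal sketch of that header-stripping step, assuming every file's first line is a header row (the output name and location are arbitrary; writing it outside the current directory keeps it out of the *.csv glob):
for f in *.csv; do
    tail -n +2 "$f"        # drop the header line, keep the rest
done > ../combined_no_headers.csv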

Find and move files that are NOT in a text list Linux

I am cleaning up the storage of an old database and need to remove all of the files that are currently not in use.
I have a list of the files and paths that are active and would like to move all of the files that are not active into a single location where I can review and then delete them if not needed.
I am running mySQL 5.0 on RHEL5
How can I use xargs or find to locate the paths/files that are not in activefiles.txt?
All help is much appreciated. Thank you.
UPDATED BELOW:
Let me try and be more clear. I have a mysql database which contains the path and filename in one of the tables.
mysql> select FilePath from metadata;
+------------------------+
| FilePath               |
+------------------------+
| ./sample/XYZ/filename1 |
| ./sample/XYZ/filename2 |
| ./sample/XYZ/filename3 |
+------------------------+
3 rows in set (0.00 sec)
What I need to do is place this column into a text document and then remove all the subdirectories and files in directory XYZ that are NOT on this list.
For example:
$ mysql -u root -e 'select FilePath from database.metadata;' > deletelist.txt
$ xargs rm < deletelist.txt
This would remove all of the files returned from the mysql query.
What I want to do is remove all of the files in the same subdirectory that are NOT in deletelist.txt
Hope that's a little more clear.
My suggestion would be: create a temporary folder to hold the files you want and move them there. Then move the remaining files in the folder somewhere else, move the original files back, and delete the unwanted files. This way you never have to enumerate the unwanted files (you can let the shell do it).
Note that in most filesystems, moving files is cheap - it doesn't require copying the bits, just updating directories.
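A rough sketch of that approach, assuming activefiles.txt lists one path per line relative to the directory you are cleaning (the keep/ and review/ holding areas are placeholder names):
mkdir -p ../keep ../review
# move the active files out of the way, preserving their relative paths
while read -r f; do
    mkdir -p "../keep/$(dirname "$f")"
    mv "$f" "../keep/$f"
done < activefiles.txt
# everything left behind is not on the list; park it for review
mv -- * ../review/
# bring the active files back
cp -a ../keep/. .
Only delete the contents of ../review once you have checked them.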

Manipulating giant MySQL dump files

What's the easiest way to get the data for a single table, delete a single table or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an sql dump, I use a combination of grep, head and tail.
Eg:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend upon those principles to knock up a script that would split the whole thing down into one file per table.
Anyone want a go doing it here?
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
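Taking up that invitation, here is one rough attempt (an untested sketch; it assumes each table's dump runs from its CREATE TABLE line up to the line before the next CREATE TABLE, with the last table running to the end of the file):
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}' > tables.txt
total=$(wc -l < dump.sql)
# pair each table's start line with the line before the next table (or EOF),
# then cut that range out into its own file
awk -v total="$total" '
    NR > 1 { print prev_line, prev_name, $1 - 1 }
           { prev_line = $1; prev_name = $2 }
    END    { print prev_line, prev_name, total }
' tables.txt |
while read -r start name end; do
    sed -n "${start},${end}p" dump.sql > "${name}.sql"
done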
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
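For instance (a hypothetical invocation; adjust the output directory and database name to your own):
mysqldump -u root -p --tab=/tmp/dumpdir mydatabase
That writes a tbl_name.sql and a tbl_name.txt per table into /tmp/dumpdir, which gives you the per-table split without any post-processing.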
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but if it can help anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split system command.
split -l 1000 import.sql splited_file
The above will split the SQL file every 1000 lines.
Hope this helps someone.