I'm using mdb-tools on FreeBSD to convert a Microsoft Access DB to MySQL.
The script looks like this (to_mysql.sh):
#!/usr/local/bin/bash
echo "DROP TABLE IF EXISTS Student;"
mdb-schema -T Student $1 mysql
mdb-export -D '%Y-%m-%d %H:%M:%S' -I mysql $1 Student
And I'm using it like:
./to_mysql.sh accessDb.MDB > data.sql
The problem is that the GUID (the second column) in the mdb changes for all rows.
In the access DB one row looks like this:
|{D115266B-D5A3-4617-80F8-7B80EE3022DA}|2013-06-11 08.54.14|2015-12-17 14.57.01|2|2||||||0|111111-1111||Nameson|Name|||||3|||SA|0||||0|Gatan 2|222 22|1234 567
And when I convert it to MySQL using the script above it looks like this:
INSERT INTO `Student` (
`UsedFields`,`GUID`,`Changed`,`ChangedLesson`,`AccessInWebViewer`,`VisibleInWebViewer`
,`PasswordInWebViewer`,`Language`,`UserMan`,`SchoolID`,`Owner`,`DoNotExport`
,`Student`,`Category`,`LastName`,`FirstName`,`Signature`,`Sex`
,`Phone`,`SchoolType`,`Grade`,`EMail`,`Program`,`IgnoreLunch`
,`ExcludedTime`,`Individual timetable`,`Adress(TEXT) `,`Postnr(TEXT) `
,`Ort(TEXT) `
)
VALUES (
NULL,"{266bd115-d5a3-4617-f880-807b30eeda22}","2013-06-11 08:54:14"
,"2015-12-17 14:57:01",2,2,NULL,NULL,NULL,NULL,NULL,0,"111111-1111"
,NULL,"Nameson","Name ",NULL,NULL,NULL,NULL,"3",NULL,"SA"
,0,NULL,0,"Gatan 2","222 22","1234 567"
);
Everything is correct except the GUID column; it changes from:
{D115266B-D5A3-4617-80F8-7B80EE3022DA}
to
{266bd115-d5a3-4617-f880-807b30eeda22}
It looks like the characters are just being reordered, but I have no idea why.
Does anyone know why and how I can prevent this?
Thank you!
Seems like a byte-order issue in mdbtools. As a workaround, create a small sed script, mdb_fixguids, something like:
#!/bin/sed -f
s/{\(....\)\(....\)-\(....-....-....-............\)}/{\2\1-\3}/g;
s/{\(........-....-....\)-\(..\)\(..\)-\(..\)\(..\)\(..\)\(..\)\(..\)\(..\)}/{\1-\3\2-\5\4\7\6\9\8}/g
put it into the path and use it in the conversion pipe, something like
./to_mysql.sh accessDb.MDB | mdb_fixguids > data.sql
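To sanity-check the rewrite on a single GUID before running the full conversion (assuming mdb_fixguids is executable and on your PATH):
echo '{266bd115-d5a3-4617-f880-807b30eeda22}' | mdb_fixguids
# prints {d115266b-d5a3-4617-80f8-7b80ee3022da}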
BTW :) this is the first time I've needed all nine possible backrefs in sed.
I'm trying to update multiple rows in a DB using a small script.
I need to update the rows based on some specific user_ids, which I have in a file on a Linux machine.
#!/bin/bash
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in ()";
As you see above, the user_ids are in a file, let's say /opt/test/user_ids_txt.
How can I import them into this command?
This really depends on the format of user_ids_txt. If we assume it just happens to be in the correct syntax for your SQL IN clause, the following will work:
#!/bin/bash
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in ($(< /opt/test/user_ids_txt))";
The bash interpreter will substitute in the contents of the file. This can be dangerous for SQL queries, so I would echo out the command on the terminal to make sure it is correct before implementing it. You should be able to preview your SQL query by simply running the following on the command line:
echo "update device set in_use=0 where user_id in ($(< /opt/test/user_ids_txt))"
If your file is not in the SQL in syntax you will need to edit it (or a copy of it) before running your query. I would recommend something like sed for this.
Example
Let's say your file /opt/test/user_ids_txt is just a list of user_ids in the format:
aaa
bbb
ccc
You can use sed to edit this into the correct SQL syntax:
sed "s/^/'/; s/\$/'/; 2,\$s/^/,/" /opt/test/user_ids_txt
The output of this command will be:
'aaa'
,'bbb'
,'ccc'
If you look at this sed command, you will see 3 separate commands separated by semicolons. The individual commands translate to:
1: Add ' to the beginning of every line
2: Add ' to the end of every line
3: Add , to the beginning of every line but the first
Note: If your IDs are strictly numeric, you only need the third command (a simpler sketch for that case follows at the end of this answer).
This would make your SQL query translate to:
update device set in_use=0 where user_id in ('aaa'
,'bbb'
,'ccc')
Rather than make a temporary file to store this, I would use a bash variable, and simply plug that into the query like this:
#!/bin/bash
in_statement="$(sed "s/^/'/; s/\$/'/; 2,\$s/^/,/" /opt/test/user_ids_txt)"
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in (${in_statement})";
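For the strictly numeric case mentioned in the note above, a simpler sketch that skips the quoting entirely (paste joins the lines of the file with commas):
#!/bin/bash
# Collapse the ID file into a single comma-separated line, e.g. 1,2,3.
in_statement="$(paste -sd, /opt/test/user_ids_txt)"
mysql -u user -ppassword db -e "update device set in_use=0 where user_id in (${in_statement})";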
Basically I'm in the unenviable position of updating our entire system to stop using a certain table and use another one instead. I've already done this for all of our code; now I need to do it for all of our functions and procedures.
I know that I can get a list of the functions / procedures in a database as such:
SELECT * FROM INFORMATION_SCHEMA.ROUTINES
I also know that I can look at the code for an individual function / procedure as such:
SHOW CREATE FUNCTION function_name
SHOW CREATE PROCEDURE procedure_name
However, I don't want to have to look through each function and procedure one by one, as we have over 200 of them.
I'm wondering if there is anything like...
SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE code_column_name LIKE '%search_string%'
There doesn't seem to be any column in INFORMATION_SCHEMA.ROUTINES that contains the code, but... is there a way to do this on a different table perhaps?
I would use UNIX grep for that.
Output the results of SHOW CREATE FUNCTION/PROCEDURE to a flat file on disk and run grep on it, or pipe the output directly through grep.
Here is one way to do it:
echo 'show create function foo' | mysql -h <host> -u <user> -p<pass> <schema> | grep <obsolete-tablename>
or dump the entire database to disk and then grep it.
mysqldump -h <host> -u <user> -p<pass> --routines <schema> > mydump.sql
grep <obsolete-tablename> mydump.sql
Try:
SELECT * FROM mysql.proc WHERE body LIKE '%search_string%'
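One caveat, hedged: mysql.proc exists in MySQL 5.x but was removed in MySQL 8.0. From the shell, the search could look like this (credentials are placeholders):
mysql -u user -p -e "SELECT db, name, type FROM mysql.proc WHERE body LIKE '%search_string%'"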
You asked about MySQL but I will just mention that I use this for stored procedures in MS SQL Server:
SELECT object_name(id)
FROM syscomments
WHERE text LIKE '%wibble%'
The INFORMATION_SCHEMA.ROUTINES table (which you mention) also contains similar information, but the text is cut off after 4000 chars, so you find fewer matches; not very helpful.
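On newer SQL Server versions, sys.sql_modules avoids that truncation because its definition column is nvarchar(max); a hedged alternative to the (deprecated) syscomments query above:
SELECT OBJECT_NAME(object_id)
FROM sys.sql_modules
WHERE definition LIKE '%wibble%'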
I have created a script of Hive queries, mainly for feature creation and scoring for a cross-sell project. Most of the queries are simple ones that do the data cleaning, transformation, etc. I want to automate this process so that I can start with a Hive table as input and output the final result into an HBase file. My questions are:
What is the best way to do it ?
Can I simply create filename.sql or filename.hql and run it from the shell using hive -f filename.sql?
Is there something in Hive like PL/SQL?
You can do it in multiple ways.
For example, you can use the Hive CLI; it makes jobs like this very easy.
You can write a shell script on Linux or a .bat file on Windows.
In the script you can simply add entries like the ones below:
$HIVE_HOME/bin/hive -e 'select a.col from tab1 a';
or, if you have a file:
$HIVE_HOME/bin/hive -f /home/my/hive-script.sql
Make sure you have set $HIVE_HOME in your environment.
Once you have tested the script and it works fine, you can schedule it with a cron job (see the sketch below).
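A minimal sketch of the cron step (all paths are placeholders; cron runs with a bare environment, so the wrapper sets $HIVE_HOME itself):
#!/bin/bash
# run_hive.sh - wrapper so cron has the environment the Hive CLI needs.
export HIVE_HOME=/opt/hive
$HIVE_HOME/bin/hive -f /home/my/hive-script.sql
And the matching crontab entry, running the wrapper every night at 02:00 and appending all output to a log:
0 2 * * * /home/my/run_hive.sh >> /home/my/hive-script.log 2>&1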
It is important to note that, with either technique, each of your queries must be separated by a semicolon, i.e.:
hive -e 'select * from tableA limit 10;select * from tableB limit 10'
In PowerShell, how do I execute my mysql script so that the results are piped into a csv file? The results of this script is just a small set of columns that I would like copied into a csv file.
I can have it go directly to the shell by doing:
mysql> source myscript.sql
And I have tried various little things like:
mysql> source myscript.sql > mysql.out
mysql> source myscript.sql > mysql.csv
in infinite variation, and I just get errors. My db connection is alright because I can do basic table queries from the command line, etc. I haven't been able to find a solution on the web so far either...
Any help would be really appreciated!
You seem not to be running PowerShell but the mysql command-line tool (perhaps you started it in a PowerShell console, though).
Note also that the mysql command-line tool cannot export directly to CSV; a rough workaround is sketched at the end of this answer.
However, to redirect the output to a file just run
mysql mydb < myscript.sql >mysql.out
or e.g.
echo select * from mytable | mysql mydb >mysql.out
(and whatever arguments to mysql you need, like username, hostname)
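Since the tool has no CSV mode, a rough workaround is to convert its tab-separated batch output yourself; a minimal sketch, assuming a Unix-like shell with sed available and values that contain no embedded tabs, commas or quotes:
# mysql's non-interactive output is TAB-separated; turn each TAB into a comma.
mysql mydb < myscript.sql | sed 's/\t/,/g' > mysql.csv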
Are you looking for SELECT ... INTO OUTFILE? dev.mysql.com/doc/refman/5.1/en/select.html – Pekka
Yep. SELECT ... INTO OUTFILE worked! But to make sure you get column names you also need to do something like:
select 'a', 'b', 'c'
union all
select a, b, c
from actual
The first SELECT emits the column names as a literal header row ahead of the real data.
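Putting the comment's INTO OUTFILE suggestion and the header-row trick together, a minimal sketch (the table actual and columns a, b, c are placeholders, and the MySQL server itself needs write access to /tmp):
SELECT 'a', 'b', 'c'
UNION ALL
SELECT a, b, c FROM actual
INTO OUTFILE '/tmp/mysql.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';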
I have a view and want to extract its data into a file that contains the CREATE TABLE statement as well as the data.
I know that mysqldump doesn't work on views.
Obviously, there isn't an automated way to generate the CREATE TABLE statement of a table that does not exist. So you basically have two options:
Create an actual table, dump it and remove it afterwards.
Write a lot of code to analyse the view and underlying tables and generate the appropriate SQL.
The first option is not optimal at all, but it's easy to implement:
CREATE TABLE my_table AS
SELECT *
FROM my_view
You can now dump the table with mysqldump. When you're done:
DROP TABLE my_table
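The dump step between the CREATE and the DROP might look like this (credentials and schema name are placeholders):
mysqldump -u user -p mydb my_table > my_view_dump.sql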
The second option can be as optimal as you need, but it can get pretty complicated, and it depends a lot on your actual needs and tool availability. However, if performance is an issue, you can combine both approaches in a quick and dirty trick:
CREATE TABLE my_table AS
SELECT *
FROM my_view
LIMIT 1;
SHOW CREATE TABLE my_table;
Now, you use your favourite language to read values from my_view and build the appropriate INSERT INTO code. Finally:
DROP TABLE my_table;
In any case, feel free to explain why you need to obtain SQL code from views and we may be able to find better solutions.
Use SELECT ... INTO OUTFILE to create a dump of the data.
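A minimal sketch of that, reusing my_view from the answer above (the server writes the file itself, so it needs write access to the target path):
SELECT *
FROM my_view
INTO OUTFILE '/tmp/my_view.csv'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';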
I have written a bash function to export the "structure" and data of a VIEW without creating a full copy of the data. I tested it with MySQL 5.6 on a CentOS 7 server. It properly takes into account columns with JSON values and strings like "O'Mally", though you may need to tweak it further for other special cases.
For the sake of brevity, I did not make it robust in terms of error checks or anything else.
function export_data_from_view
{
local DB_HOST=$1
local SCHEMA=$2
local VIEW=$3
local TMP_TABLE_NAME="view_as_table_$RANDOM"
local SQL1="
create table $TMP_TABLE_NAME as
(select * from $VIEW where 1=0);
show create table $TMP_TABLE_NAME \G
"
# Create an empty table with the structure of all columns in the VIEW.
# Display the structure. Delete lines not needed.
local STRUCT=$(
mysql -h $DB_HOST -BANnq -e "$SQL1" $SCHEMA |
egrep -v "\*\*\*.* row \*\*\*|^${TMP_TABLE_NAME}$" |
sed "s/$TMP_TABLE_NAME/$VIEW/"
)
echo
echo "$STRUCT;"
echo
local SQL2="
select concat( 'quote( ', column_name, ' ),' )
from information_schema.columns
where table_schema = '$SCHEMA'
and table_name = '$VIEW'
order by ORDINAL_POSITION
"
local COL_LIST=$(mysql -h $DB_HOST -BANnq -e "$SQL2")
# Remove the last comma from COL_LIST.
local COL_LIST=${COL_LIST%,}
local SQL3="select $COL_LIST from $VIEW"
local INSERT_STR="insert into $VIEW values "
# Fix quoting issues to produce executable INSERT statements.
# \x27 is the single quote.
# \x5C is the back slash.
mysql -h $DB_HOST -BANnq -e "$SQL3" $SCHEMA |
sed '
s/\t/,/g; # Change each TAB to a comma.
s/\x5C\x5C\x27/\x5C\x27/g; # Change each back-back-single-quote to a back-single-quote.
s/\x27NULL\x27/NULL/g; # Remove quotes from around real NULL values.
s/\x27\x27{/\x27{/g; # Remove extra quotes from the beginning of a JSON value.
s/}\x27\x27/}\x27/g; # Remove extra quotes from the end of a JSON value.
' |
awk -v insert="$INSERT_STR" '{print insert "( " $0 " );"}'
local SQL4="drop table if exists $TMP_TABLE_NAME"
mysql -h $DB_HOST -BANnq -e "$SQL4" $SCHEMA
echo
}
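An example invocation, redirecting the generated SQL to a file (host, schema and view names are placeholders):
export_data_from_view db-host.example.com my_schema my_view > my_view_export.sql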