Original MySQL tbl_driver
delimiter $$
CREATE TABLE `tbl_driver` (
`_id` int(11) NOT NULL AUTO_INCREMENT,
`Driver_Code` varchar(45) NOT NULL,
`Driver_Name` varchar(45) NOT NULL,
`AddBy_ID` int(11) NOT NULL,
PRIMARY KEY (`_id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1$$
mysql2sqlite.sh
#!/bin/sh
# Converts a mysqldump file into a Sqlite 3 compatible file. It also extracts the MySQL `KEY xxxxx` from the
# CREATE block and creates them in separate commands _after_ all the INSERTs.
# Awk is chosen because it's fast and portable. You can use gawk, original awk or even the lightning fast mawk.
# The mysqldump file is traversed only once.
# Usage: $ ./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
# Example: $ ./mysql2sqlite --no-data -u root -pMySecretPassWord myDbase | sqlite3 database.sqlite
# Thanks to @artemyk and @gkuenning for their nice tweaks.
mysqldump --compatible=ansi --skip-extended-insert --compact "$@" | \
awk '
BEGIN {
FS=",$"
print "PRAGMA synchronous = OFF;"
print "PRAGMA journal_mode = MEMORY;"
print "BEGIN TRANSACTION;"
}
# CREATE TRIGGER statements have funny commenting. Remember we are in trigger.
/^\/\*.*CREATE.*TRIGGER/ {
gsub( /^.*TRIGGER/, "CREATE TRIGGER" )
print
inTrigger = 1
next
}
# The end of CREATE TRIGGER has a stray comment terminator
/END \*\/;;/ { gsub( /\*\//, "" ); print; inTrigger = 0; next }
# The rest of triggers just get passed through
inTrigger != 0 { print; next }
# Skip other comments
/^\/\*/ { next }
# Print all `INSERT` lines. The single quotes are protected by another single quote.
/INSERT/ {
gsub( /\\\047/, "\047\047" )
gsub(/\\n/, "\n")
gsub(/\\r/, "\r")
gsub(/\\"/, "\"")
gsub(/\\\\/, "\\")
gsub(/\\\032/, "\032")
print
next
}
# Print the `CREATE` line as is and capture the table name.
/^CREATE/ {
print
if ( match( $0, /\"[^\"]+/ ) ) tableName = substr( $0, RSTART+1, RLENGTH-1 )
}
# Replace `FULLTEXT KEY` or any other `XXXXX KEY` except PRIMARY by `KEY`
/^ [^"]+KEY/ && !/^ PRIMARY KEY/ { gsub( /.+KEY/, " KEY" ) }
# Get rid of field lengths in KEY lines
/ KEY/ { gsub(/\([0-9]+\)/, "") }
# Print all fields definition lines except the `KEY` lines.
/^  / && !/^(  KEY|\);)/ {
gsub( /AUTO_INCREMENT|auto_increment/, "" )
gsub( /(CHARACTER SET|character set) [^ ]+ /, "" )
gsub( /DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP|default current_timestamp on update current_timestamp/, "" )
gsub( /(COLLATE|collate) [^ ]+ /, "" )
gsub(/(ENUM|enum)[^)]+\)/, "text ")
gsub(/(SET|set)\([^)]+\)/, "text ")
gsub(/UNSIGNED|unsigned/, "")
if (prev) print prev ","
prev = $1
}
# `KEY` lines are extracted from the `CREATE` block and stored in array for later print
# in a separate `CREATE KEY` command. The index name is prefixed by the table name to
# avoid a sqlite error for duplicate index name.
/^(  KEY|\);)/ {
if (prev) print prev
prev=""
if ($0 == ");"){
print
} else {
if ( match( $0, /\"[^"]+/ ) ) indexName = substr( $0, RSTART+1, RLENGTH-1 )
if ( match( $0, /\([^()]+/ ) ) indexKey = substr( $0, RSTART+1, RLENGTH-1 )
key[tableName]=key[tableName] "CREATE INDEX \"" tableName "_" indexName "\" ON \"" tableName "\" (" indexKey ");\n"
}
}
# Print all `KEY` creation lines.
END {
for (table in key) printf key[table]
print "END TRANSACTION;"
}
'
exit 0
When I execute this script, my SQLite database becomes like this:
SQLite tbl_driver
CREATE TABLE "tbl_driver" (
"_id" int(11) NOT NULL ,
"Driver_Code" varchar(45) NOT NULL,
"Driver_Name" varchar(45) NOT NULL,
"AddBy_ID" int(11) NOT NULL,
PRIMARY KEY ("_id")
)
I want to change
"_id" int(11) NOT NULL ,
to become
"_id" int(11) NOT NULL PRIMARY KEY AUTO_INCREMENT,
or to become
"_id" int(11) NOT NULL AUTO_INCREMENT,
(without the primary key is also fine).
Any idea how to modify this script?
The AUTO_INCREMENT keyword is specific to MySQL.
SQLite has a keyword AUTOINCREMENT (without the underscore) which means the column auto-generates monotonically increasing values that have never been used before in the table.
If you leave out the AUTOINCREMENT keyword (as the script you show does currently), SQLite assigns the ROWID to a new row, which means it will be a value 1 greater than the current greatest ROWID in the table. This could re-use values if you delete rows from the high end of the table and then insert new rows.
See http://www.sqlite.org/autoinc.html for more details.
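For example, a quick sqlite3 sketch of that ROWID-reuse behaviour (throwaway table t):
CREATE TABLE t (id INTEGER PRIMARY KEY);   -- no AUTOINCREMENT: id aliases the ROWID
INSERT INTO t VALUES (1), (2), (3);
DELETE FROM t WHERE id = 3;                -- delete from the high end
INSERT INTO t DEFAULT VALUES;              -- the new row reuses id 3
-- With "id INTEGER PRIMARY KEY AUTOINCREMENT" the new row would get 4 instead.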
If you want to modify this script to add the AUTOINCREMENT keyword, it looks like you could modify this line:
gsub( /AUTO_INCREMENT|auto_increment/, "" )
To this:
gsub( /AUTO_INCREMENT|auto_increment/, "AUTOINCREMENT" )
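With that one change, the script would emit DDL along these lines for the table in the question (a sketch):
CREATE TABLE "tbl_driver" (
"_id" int(11) NOT NULL AUTOINCREMENT,
"Driver_Code" varchar(45) NOT NULL,
"Driver_Name" varchar(45) NOT NULL,
"AddBy_ID" int(11) NOT NULL,
PRIMARY KEY ("_id")
);
-- but SQLite rejects this form, as the experiments below show.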
Re your comments:
Okay I tried it on a dummy table using sqlite3.
sqlite> create table foo (
i int autoincrement,
primary key (i)
);
Error: near "autoincrement": syntax error
Apparently SQLite requires that autoincrement follow a column-level primary key constraint. It's not happy with the MySQL convention of putting the pk constraint at the end, as a table-level constraint. That's supported by the syntax diagrams in the SQLite documentation for CREATE TABLE.
Let's try putting primary key before autoincrement.
sqlite> create table foo (
i int primary key autoincrement
);
Error: AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY
And apparently SQLite doesn't like "INT"; it prefers "INTEGER":
sqlite> create table foo (
i integer primary key autoincrement
);
sqlite>
Success!
So your awk script is not able to translate MySQL table DDL into SQLite as easily as you thought it would.
Re your comments:
You're trying to duplicate the work of a Perl module called SQL::Translator, which is a lot of work. I'm not going to write a full working script for you.
To really solve this, and make a script that can automate all syntax changes to make the DDL compatible with SQLite, you would need to implement a full parser for SQL DDL. This is not practical to do in awk.
I recommend that you use your script for some of the cases of keyword substitution, and then if further changes are necessary, fix them by hand in a text editor.
Also consider making compromises. If it's too difficult to reformat the DDL to use the AUTOINCREMENT feature in SQLite, consider if the default ROWID functionality is close enough. Read the link I posted above to understand the differences.
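For example, the table from this question, hand-fixed for SQLite, would look something like this (note that int must become INTEGER and the primary key moves into the column definition):
CREATE TABLE "tbl_driver" (
  "_id" INTEGER PRIMARY KEY AUTOINCREMENT,
  "Driver_Code" varchar(45) NOT NULL,
  "Driver_Name" varchar(45) NOT NULL,
  "AddBy_ID" int(11) NOT NULL
);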
I found a weird solution, but it works, with PHP Doctrine.
Create a MySQL database.
Generate Doctrine 2 entities from the database and clean up any inconsistencies.
Doctrine 2 has a feature that compares the entities to the database and alters the database to match the entities.
Exporting the database with mysql2sqlite.sh does exactly what you describe.
So then you configure the Doctrine driver to use the SQLite db and run, via Composer:
vendor/bin/doctrine-module orm:schema-tool:update --force
It fixes up the auto-increment without any manual work.
Related
I recently wrote a Python script to extract some data from a JSON file and use it to generate some SQL INSERT values for the following statement:
INSERT INTO `card`(`artist`,`class_pid`,`collectible`,`cost`, `dbfid`, `api_db_id`, `name`, `rarity`, `cardset_pid`, `cardtype`, `attack`, `health`, `race`, `durability`, `armor`,`multiclassgroup`, `text`) VALUES ("generated entry goes here")
The names of some of the attributes are different in my SQL table but the same values are used (for example, cardClass in the JSON file/Python script is referred to as class_pid in the SQL table). The values generated from the script are valid SQL and can successfully be inserted into the database; however, I noticed that in the resulting export.txt file some of the values changed from what they originally were. For example, the following JSON entries from a utf-8 encoded JSON file:
[{"artist":"Arthur Bozonnet","attack":3,"cardClass":8,"collectible":1,"cost":2,"dbfId":2545,"flavor":"And he can't get up.","health":2,"id":"AT_003","mechanics":["HEROPOWER_DAMAGE"],"name":"Fallen Hero","rarity":"RARE","set":1,"text":"Your Hero Power deals 1 extra damage.","type":"MINION"},{"artist":"Ivan Fomin","attack":2,"cardClass":11,"collectible":1,"cost":2,"dbfId":54256,"flavor":"Were you expectorating another bad pun?","health":4,"id":"ULD_182","mechanics":["TRIGGER_VISUAL"],"name":"Spitting Camel","race":"BEAST","rarity":"COMMON","set":22,"text":"[x]At the end of your turn,\n deal 1 damage to another \nrandom friendly minion.","type":"MINION"}]
produce this output:
('Arthur Bozonnet',8,1,2,'2545','AT_003','Fallen Hero','RARE',1,'MINION',3,2,'NULL',0,0,'NULL','Your Hero Power deals 1\xa0extra damage.'),('Ivan Fomin',11,1,2,'54256','ULD_182','Spitting Camel','COMMON',22,'MINION',2,4,'BEAST',0,0,'NULL','[x]At the end of your turn,\n\xa0\xa0deal 1 damage to another\xa0\xa0\nrandom friendly minion.')
As you can see, some of the values from the JSON entries have been altered somehow as if the text encoding was changed somewhere, even though in my script I made sure that the JSON file was opened with utf-8 encoding and the resulting text file was also opened and written to in utf-8 to match the JSON file. My aim is to preserve the values exactly as they are in the JSON file and transfer those values to the generated SQL value entries exactly as they are in the JSON. As an example, in the generated SQL I want the "text" value of the second entry to be:
"[x]At the end of your turn,\n deal 1 damage to another \nrandom friendly minion."
instead of:
"[x]At the end of your turn,\n\xa0\xa0deal 1 damage to another\xa0\xa0\nrandom friendly minion."
I tried using functions such as unicodedata.normalize() but unfortunately it did not seem to change the output in any way.
This is the script that I wrote to generate the SQL values:
import json
import io

chosen_keys = ['artist','cardClass','collectible','cost',
    'dbfId','id','name','rarity','set','type','attack','health',
    'race','durability','armor',
    'multiClassGroup','text']

defaults = ['NULL','0','0','0',
    'NULL','NULL','NULL','NULL','0','NULL','0','0',
    'NULL','0','0',
    'NULL','NULL']

def saveChangesString(dataList, filename):
    with io.open(filename, 'w', encoding='utf-8') as f:
        f.write(dataList)
        f.close()

def generateSQL(json_dict):
    count = 0
    endCount = 1
    records = ""
    finalState = ""
    print('\n'+str(len(json_dict))+' records will be processed\n')
    for i in json_dict:
        entry = "("
        jcount = 0
        for j in chosen_keys:
            if j in i.keys():
                if str(i.get(j)).isdigit() and j != 'dbfId':
                    entry = entry + str(i.get(j))
                else:
                    entry = entry + repr(str(i.get(j)))
            else:
                if str(defaults[jcount]).isdigit() and j != 'dbfId':
                    entry = entry + str(defaults[jcount])
                else:
                    entry = entry + repr(str(defaults[jcount]))
            if jcount != len(chosen_keys)-1:
                entry = entry+","
            jcount = jcount + 1
        entry = entry + ")"
        if count != len(json_dict)-1:
            entry = entry+","
        count = count + 1
        if endCount % 100 == 0 and endCount >= 100 and endCount < len(json_dict):
            print('processed records '+str(endCount - 99)+' - '+str(endCount))
        if endCount + 100 > len(json_dict):
            finalState = 'processed records '+str(endCount+1)+' - '+str(len(json_dict))
        if endCount == len(json_dict):
            print(finalState)
        records = records + entry
        endCount = endCount + 1
    saveChangesString(records,'export.txt')
    print('done')

with io.open('cards.collectible.sample.example.json', 'r', encoding='utf-8') as f:
    json_to_dict = json.load(f)
    f.close()

generateSQL(json_to_dict)
Any help would be greatly appreciated, as the JSON file I am actually using contains over 2000 entries, so I would prefer to avoid having to edit things manually. Thank you.
Also the SQL table structure code is:
-- phpMyAdmin SQL Dump
CREATE TABLE `card` (
`pid` int(10) NOT NULL,
`api_db_id` varchar(50) NOT NULL,
`dbfid` varchar(50) NOT NULL,
`name` varchar(50) NOT NULL,
`cardset_pid` int(10) NOT NULL,
`cardtype` varchar(50) NOT NULL,
`rarity` varchar(20) NOT NULL,
`cost` int(3) NOT NULL,
`attack` int(10) NOT NULL,
`health` int(10) NOT NULL,
`artist` varchar(50) NOT NULL,
`collectible` tinyint(1) NOT NULL,
`class_pid` int(10) NOT NULL,
`race` varchar(50) NOT NULL,
`durability` int(10) NOT NULL,
`armor` int(10) NOT NULL,
`multiclassgroup` varchar(50) NOT NULL,
`text` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `card`
ADD PRIMARY KEY (`pid`);
ALTER TABLE `card`
MODIFY `pid` int(10) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=1;
COMMIT;
\xa0 is a variant on space. Is it coming from Word?
But, more relevant, it is not utf8; it is latin1 or other non-utf8 encoding. You need to go back to where it came from and change that to utf8.
Or, if your next step is just to put it into a MySQL table, then you need to tell the truth about the client -- namely that it is encoded in latin1 (not utf8). Once you have done that, MySQL will take care of the conversion for you during the INSERT.
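As a sketch of that second option (declaring the connection's real encoding; the demo table here is hypothetical):
CREATE TABLE demo (t TEXT) CHARACTER SET utf8mb4;
SET NAMES latin1;  -- the client really sends latin1 bytes
INSERT INTO demo (t) VALUES (CONCAT('1', _latin1 X'A0', 'extra damage'));
-- MySQL transcodes on the way in; SELECT HEX(t) shows the NBSP stored as UTF-8 C2A0.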
user@host:~# mysql -V
mysql Ver 14.14 Distrib 5.7.25-28, for debian-linux-gnu (x86_64) using 7.0 running under debian-9.9
user@host:~# uname -a
Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
user@host:~# perl -MDBI -e 'print $DBI::VERSION ."\n";'
1.636
user@host:~# perl -v
This is perl 5, version 24, subversion 1 (v5.24.1) built for x86_64-linux-gnu-thread-multi
mysql> SHOW CREATE TABLE tbl1;
tbl1 | CREATE TABLE `tbl1` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`main_id` bigint(20) NOT NULL DEFAULT '0',
`debet` varchar(255) NOT NULL DEFAULT '',
`kurs` double(20,4) NOT NULL DEFAULT '0.0000',
`summ` double(20,2) NOT NULL DEFAULT '0.00',
`is_sync` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `main_id` (`main_id`)
) ENGINE=InnoDB AUTO_INCREMENT=70013000018275 DEFAULT CHARSET=utf8
mysql> SELECT * FROM tbl1 WHERE id=70003020040132;
+----------------+----------------+-------+--------+---------+---------+
| id             | main_id        | debet | kurs   | summ    | is_sync |
+----------------+----------------+-------+--------+---------+---------+
| 70003020040132 | 70003020038511 |       | 0.0000 | 1798.00 |       0 |
+----------------+----------------+-------+--------+---------+---------+
But when I get this data via the Perl DBI module I lose precision, and the values 0.0000 and 1798.00 become 0 and 1798.
The code is as follows:
####
# These 3 subs connect to the DB, execute a query, fetch data via fetchall_arrayref, and convert undef to NULL.
####
sub DB_connect {
    # DataBase Handler
    my $dbh = DBI->connect("DBI:mysql:$DBNAME", $DBUSER, $DBPWD, {RaiseError => 0, PrintError => 0, mysql_enable_utf8 => 1}) or die "Error connecting to database: $DBI::errstr";
    return $dbh;
}
sub DB_executeQuery {
    # Executes SQL query. Return reference to array, or array, according to argv[0]
    # argv[0] - "A" returns array, "R" - reference to array
    # argv[1] - DB handler from DB_connect
    # argv[2] - query to execute
    my $choice = shift @_;
    my $dbh    = shift @_;
    my $query  = shift @_;
    print "$query\n" if $DEBUG > 2;
    my $sth = $dbh->prepare($query) or die "Error preparing $query for execution: $DBI::errstr";
    $sth->execute;
    my $retval = $sth->fetchall_arrayref;
    if ($choice eq "A") {
        my @ret_arr = ();
        foreach my $value (@{ $retval }) {
            push @ret_arr, @{ $value };
        }
        return @ret_arr;
    }
    elsif ($choice eq "R") {
        return $retval;
    }
}
sub undef2null {
    # argv[1] - reference to array of values where undef
    # values have to be changed to NULL
    # Returns array of prepared values: (...) (...) ...
    my $ref = shift @_;
    my @array = ();
    foreach my $row (@{ $ref }) {
        my $str = "";
        foreach my $val (@{ $row }) {
            if (! defined($val)) {
                $str = "$str, NULL";
            }
            else {
                # Escape quotes and other symbols listed in square brackets
                $val =~ s/([\"\'])/\\$1/g;
                $str = "$str, \'$val\'";
            }
        }
        # Remove ', ' at the beginning of each VALUES substring
        $str = substr($str, 2);
        push @array, "($str)";
    } # End foreach my $row (@{ $ref_values })
    return @array;
} # End undef2null
#### Main call
# ...
# Somewhere in code I get data from DB and print it to out file
my @arr_values = ();
my @arr_col_names = DB_executeQuery("A", $dbh, qq(SELECT column_name FROM `information_schema`.`columns` WHERE `table_schema` = '$DBNAME' AND `table_name` = '@{ $table }'));
@arr_ids = DB_executeQuery("A", $dbh, qq(SELECT `id` FROM `@{ $table }` WHERE `is_sync`=0));
my $ref_values = DB_executeQuery("R", $dbh, "SELECT * FROM \`@{ $table }\` WHERE \`id\` IN(" . join(",", @arr_ids) . ")");
@arr_values = undef2null($ref_values);
print FOUT "REPLACE INTO \`@{ $table }\` (`" . join("`, `", @arr_col_names) . "`) VALUES " . (join ", ", @arr_values) . ";\n";
and as a result I get this string:
REPLACE INTO `pko_plat` (`id`, `main_id`, `debet`, `kurs`, `summ`, `is_sync`) VALUES ('70003020040132', '70003020038511', '', '0', '1798', '0')
In the DB it was 0.0000 and became 0; it was 1798.00 and became 1798.
Perl's DBI documentation says it gets data 'as is' into strings, with no translations made. But then who rounded the values?
The rounding you see is happening because of the way you create the columns.
`kurs` double(20,4) NOT NULL DEFAULT '0.0000'
`summ` double(20,2) NOT NULL DEFAULT '0.00'
If you look at the MySQL floating-point type documentation you will see that you are using the non-standard syntax double(m, d), where the two parameters define how the float is output.
So in your case the values stored in summ will be displayed with 2 digits after the decimal point. This means that when Perl fetches a value that is 1.0001 in the database, the value the database delivers to Perl is rounded to the set number of digits (in this case .00).
Perl in turn interprets this value ("1.00") as a float, and when printed will not show any trailing zeroes. If you want these, you should account for this in your output.
For example: print sprintf("%.2f\n", $summ);
The way I see it you have two ways you can go (if you want to avoid this loss of precision):
Only add numbers with the correct precision to the database (so for summ only two trailing digits, and four for kurs).
Alter your table creation to the standard syntax for floats and determine the output formatting in Perl (which you will be doing either way):
`kurs` double NOT NULL DEFAULT '0.0'
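Applied to the table from the question, that would look something like this (a sketch):
ALTER TABLE tbl1
  MODIFY `kurs` double NOT NULL DEFAULT 0,
  MODIFY `summ` double NOT NULL DEFAULT 0;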
I am having a problem inserting data into a MySQL table. For simplicity, my database has 2 tables, foo & foo2.
Table foo2 has two records
id=1, code="N", desc="Normal"
id=2, code="D", desc="Deviate"
I want to populate foo but I need to reference foo2 in doing so. My current code is:
$inputarray = array(
    array("ONE", "Proj 1", "N"),
    array("TWO", "Proj 2", "D"));
for ($i = 0; $i < count($inputarray); $i++) {
    $sql3 = $pdo->prepare("INSERT INTO foo (var1, var2, var3)
        VALUES ('{$inputarray[$i][0]}'
        ,'{$inputarray[$i][1]}'
        , (select id from foo2 where code='($inputarray[$i][3])')
        )");
    $sql3->execute();
}
The "select id .." line generates an SQL error message but if I hard code it like
(select id from foo2 where code='N')
then the program runs without error. I have tried escape characters, using double quotes inside the single quotes etc. How can i best get around this problem?
The code to create foo was
$sql2 = $pdo->prepare('
    CREATE TABLE foo(
        id INT NOT NULL AUTO_INCREMENT,
        var1 VARCHAR(3) NOT NULL UNIQUE,
        var2 VARCHAR(32) NOT NULL,
        var3 INT NOT NULL,
        PRIMARY KEY (id),
        FOREIGN KEY (var3) REFERENCES foo2 (id)
            ON DELETE RESTRICT
            ON UPDATE RESTRICT) ENGINE=INNODB');
This is not the way to use prepared statements:
$pdo->prepare("INSERT INTO foo (var1, var2, var3)
    VALUES ('{$inputarray[$i][0]}'
    ,'{$inputarray[$i][1]}'
    , (select id from foo2 where code='($inputarray[$i][3])')
    )");
This is plain old string concatenation; for all practical purposes you might as well use mysql_* functions here. The proper way to use PDO is like this:
$stmt = $pdo->prepare("INSERT INTO foo (var1, var2, var3)
    VALUES (?, ?, (select id from foo2 where code = ?))");
$stmt->bindParam(1, $inputarray[$i][0]);
$stmt->bindParam(2, $inputarray[$i][1]);
$stmt->bindParam(3, $inputarray[$i][2]); // the code letter is the third element, index 2
$stmt->execute();
Note how much more readable your code has become? You can avoid the repetitive calls to bindParam by passing the parameters directly to execute.
PS: To make your current code work, write {$inputarray[$i][2]}: note the newly added curly brackets (and that the index is 2, not 3, since the code letter is the third element).
I need to design the structure of a table that is going to store a log of events/actions for a project management website.
The problem is, these logs will be worded differently depending on what the user is viewing.
Example:
On the overview, an action could say "John F. deleted the item #2881"
On the single-item page, it would say "John F. deleted this item"
If the current user IS John F., it would say "You deleted this item"
I'm not sure if I should store each different possibility in the table; that doesn't sound like the optimal approach.
For any kind of log you can use the following table structure:
CREATE TABLE logs (
id bigint NOT NULL AUTO_INCREMENT,
auto_id bigint NOT NULL DEFAULT '0',
table_name varchar(100) NOT NULL,
updated_at datetime DEFAULT NULL,
updated_by bigint NOT NULL DEFAULT '0',
updated_by_name varchar(100) DEFAULT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=3870 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
And then create another table to record exactly which columns were updated:
CREATE TABLE logs_entries (
id bigint NOT NULL AUTO_INCREMENT,
log_id bigint NOT NULL DEFAULT '0',
field_name varchar(100) NOT NULL,
old_value text,
new_value text,
PRIMARY KEY (id),
KEY log_id (log_id),
CONSTRAINT logs_entries_ibfk_1 FOREIGN KEY (log_id) REFERENCES logs (id) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=7212 DEFAULT CHARSET=utf8mb3
Now your table data will look like this (screenshot omitted).
Now create a database view to fetch the data easily with a simple query:
DELIMITER $$
ALTER ALGORITHM=UNDEFINED SQL SECURITY DEFINER VIEW view_logs_entries AS
SELECT
le.id AS id,
le.log_id AS log_id,
le.field_name AS field_name,
le.old_value AS old_value,
le.new_value AS new_value,
l.auto_id AS auto_id,
l.table_name AS table_name,
l.updated_at AS updated_at,
l.updated_by AS updated_by,
l.updated_by_name AS updated_by_name
FROM (logs_entries le
LEFT JOIN logs l
ON ((le.log_id = l.id)))$$
DELIMITER ;
After creating the database view your data will look like this (screenshot omitted), so now you can easily query any logs of your project.
You must have noticed that I added updated_by and updated_by_name columns to the logs table. There are two ways to fill the updated_by_name column:
You can write a query on every log entry to fetch the user name and store it (not recommended)
You can make a database trigger to fill this column (recommended)
You can create the database trigger like this; it will automatically insert the user name whenever a logs entry is added to the database:
DELIMITER $$
USE YOUR_DATABASE_NAME$$
CREATE
TRIGGER logs_before_insert BEFORE INSERT ON logs
FOR EACH ROW BEGIN
SET new.updated_by_name= (SELECT fullname FROM users WHERE user_id = new.updated_by);
END;
$$
DELIMITER ;
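With the trigger in place, an ordinary insert fills in the name automatically; a quick sketch (the users table comes from the trigger body, and the ids here are assumptions):
INSERT INTO logs (auto_id, table_name, updated_at, updated_by)
VALUES (42, 'patients', NOW(), 7);
-- the BEFORE INSERT trigger copies users.fullname for user_id 7 into updated_by_name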
After doing all this, you can insert entries into the logs table whenever you make changes in any database table. In my case I have a PHP CodeIgniter project, and I did it like this in my model file; here I am updating the patients table of my database:
public function update($id, $data)
{
    // Log this activity
    $auto_id = $id;
    $table_name = 'patients';
    $updated_by = @$data['updated_by'];
    $new_record = $data;
    $old_record = $this->db->where('id', $auto_id)->get($table_name)->row();
    $data['updated_at'] = date('Y-m-d H:i:s');
    $this->db->where('id', $id);
    $return = $this->db->update($table_name, $data);
    //my_var_dump($this->db->last_query());
    if ($updated_by)
    {
        $this->log_model->set_log($auto_id, $table_name, $updated_by, $new_record, $old_record);
    }
    return $return;
}
The set_log function checks which fields actually changed:
public function set_log($auto_id, $table_name, $updated_by, $new_record, $old_record)
{
    $entries = [];
    foreach ($new_record as $key => $value)
    {
        if ($old_record->$key != $new_record[$key])
        {
            $entries[$key]['old_value'] = $old_record->$key;
            $entries[$key]['new_value'] = $new_record[$key];
        }
    }
    if (count($entries))
    {
        $data['auto_id'] = $auto_id;
        $data['table_name'] = $table_name;
        $data['updated_by'] = $updated_by;
        $this->insert($data, $entries);
    }
}
and the insert log function looks like this:
public function insert($data, $entries)
{
    $data['updated_at'] = date('Y-m-d H:i:s');
    if ($this->db->insert('logs', $data))
    {
        $id = $this->db->insert_id();
        foreach ($entries as $key => $value)
        {
            $entry['log_id'] = $id;
            $entry['field_name'] = $key;
            $entry['old_value'] = $value['old_value'];
            $entry['new_value'] = $value['new_value'];
            $this->db->insert('logs_entries', $entry);
        }
        return $id;
    }
    return false;
}
You need to separate data from display. In your log, store the complete information (user xxx deleted item 2881). In your log viewer, you have the luxury of substituting as needed to make it more readable.
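A minimal sketch of that idea (the column names here are illustrative, not prescriptive):
CREATE TABLE activity_log (
  id BIGINT NOT NULL AUTO_INCREMENT,
  actor_id BIGINT NOT NULL,      -- who acted, e.g. John F.'s user id
  action VARCHAR(32) NOT NULL,   -- e.g. 'deleted'
  item_id BIGINT NOT NULL,       -- e.g. 2881
  created_at DATETIME NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
-- The viewer renders "John F. deleted the item #2881", "John F. deleted this item",
-- or "You deleted this item" from the same row, depending on the page and the current user.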
I have PHP code which parses XML files and stores the parsed information in my database (MySQL) on my Linux server.
All my tables' collation is utf8_unicode_ci, and even my database collation is utf8_unicode_ci.
I have a lot of languages in my XML files (Turkish, Swedish, French, ...) and I want the special characters in these languages to be stored in their original form, but my problem is that with an XML pattern like this:
<match date="24.08.2012" time="17:00" status="FT" venue="Stadiumi Loro Boriçi (Shkodër)" venue_id="1557" static_id="1305963" id="1254357">
the venue value will be stored in my database like this:
Stadiumi Loro BoriÃ§i (ShkodÃ«r)
Can anybody help me store the value in my database as it is?
Try to use "SET NAMES utf8" command before the query. As the manual says:
SET NAMES indicates what character set the client will use to send SQL statements to the server.
I tried to imitate your case. Created a table test1:
DROP TABLE IF EXISTS `test1`;
CREATE TABLE `test1` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`value` varchar(20) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
And I tried to insert a string with French special characters and Chinese. For me it worked OK even without SET NAMES.
<?php
$str = "çëÙùÛûÜüÔô汉语漢語";
try {
    $conn = new PDO('mysql:host=localhost;dbname=test', 'user', '');
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $conn->exec("SET NAMES utf8");
    $stmt = $conn->prepare('INSERT INTO test1 SET value = :value');
    $stmt->execute(array('value' => $str));
    //select the inserted row
    $stmt = $conn->prepare('SELECT * FROM test1 WHERE id = :id');
    $stmt->execute(array('id' => 1));
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        print_r($row);
    }
}
catch (PDOException $e) {
    echo 'ERROR: ' . $e->getMessage();
}
It worked correctly, printing this:
Array
(
    [id] => 1
    [value] => çëÙùÛûÜüÔô汉语漢語
)
Also, when you are testing, don't read directly from the console; redirect the output into a file and open it with a text editor:
php test.php > 1.txt