I'm hoping this may just be a case of me not being able to use Google correctly. I want to do something very specific with MaxScale: intercept every query going to my database, run an explain plan on it first, and reject the query outright if the explain plan falls beyond a certain complexity threshold. I'm happy to learn whatever language is required (C, I think), but I can't seem to find any kind of API docs or examples based on my internet digging.
Here's my configuration file (no idea if it will even help). I'd love to post something I've already tried - but I don't even know what to try!
[maxscale]
threads=4
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=master,slave1
user=dbuser
passwd=dbpswd
monitor_interval=10000
[qla]
type=filter
module=qlafilter
options=/tmp/QueryLog
[fetch]
type=filter
module=regexfilter
match=fetch
replace=select
[RW]
type=service
localhost_match_wildcard_host=1
router=readwritesplit
servers=master,slave1
user=dbuser
passwd=dbpswd
max_slave_connections=100%
router_options=slave_selection_criteria=LEAST_CURRENT_OPERATIONS
[RR]
type=service
localhost_match_wildcard_host=1
router=readconnroute
router_options=synced
servers=slave1
user=dbuser
passwd=dbpswd
[Debug Interface]
type=service
router=debugcli
[CLI]
type=service
router=cli
[RWlistener]
type=listener
service=RW
protocol=MySQLClient
address=127.0.0.1
port=3307
[RRlistener]
type=listener
service=RR
protocol=MySQLClient
address=127.0.0.2
port=3307
[Debug Listener]
type=listener
service=Debug Interface
protocol=telnetd
address=127.0.0.2
port=4442
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=127.0.0.2
port=6603
[master]
type=server
address=master.dns.address
port=3306
protocol=MySQLBackend
[slave1]
type=server
address=slave1.dns.address
port=3306
protocol=MySQLBackend
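For what it's worth, a custom filter would be hooked into a service the same way the existing qla/fetch filters would be, via the service's filters= parameter. A rough sketch of the wiring I have in mind (the complexityfilter module name is made up; the filter module itself would still have to be written against MaxScale's filter API):
[complexity]
type=filter
module=complexityfilter
# and then inside the existing [RW] service section:
filters=qla|complexity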
I'm at a dead end with the configuration of Snort.
In theory a simple problem.
I created a test rule to check if snort runs properly.
Location: /etc/snort/rules/local.rules
Content:
alert icmp any any -> $HOME_NET any (msg:"ICMP on fire"; sid:10000001; rev:001;)
Then I ran in the terminal:
sudo snort -T -i enp0s3 -c /etc/snort/snort.conf
Message I receive at the end of the initialization:
"Snort successfully validated the configuration!"
"Snort exiting"
But scrolling up I'm seeing:
Initializing rule chains...
0 Snort rules read
0 detection rules
0 decoder rules
0 preprocessor rules
0 Option Chains linked into 0 Chain Headers
No rules at all!
The rule path is set correctly in the conf file, /etc/snort/snort.conf:
var RULE_PATH /etc/snort/rules
Snort 2.9.17 Build 199
Ubuntu 20.04
Any ideas? Thanks in advance!
I would recommend supplying the rule path when you execute Snort using the "--rule-path" flag.
The --rule-path flag is not available and not recognized.
As far as I understand, this variable is just that: a variable that's not used anywhere in the configuration file.
The only way/workaround that I found was to include the rule files explicitly, for example by appending this to snort.conf:
...
include c:\local.rules
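(On the Ubuntu setup from the question that include would presumably use the Linux path instead; the stock snort.conf normally already ships a site-specific block with a line like the one below, and if it is missing or commented out you get exactly the "0 Snort rules read" result above.)
include $RULE_PATH/local.rules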
Besides that, has anyone found a way to match content in the answer/response?
I mean, let's suppose I want to check whether the server has answered with known content, for example: success. I've tried the bidirectional operator <> and flow:to_client, but nothing has worked.
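For clarity, the kind of rule I've been trying looks roughly like this (the sid and the "success" string are just placeholders):
alert tcp any any -> any any (msg:"Server answered with success"; flow:to_client,established; content:"success"; nocase; sid:10000002; rev:1;)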
I followed the usual solutions, but I still have the problem.
In my configure.ac file I have a check for mysql:
AC_CHECK_HEADER([mysql/mysql.h], ,AC_MSG_ERROR([Could not find mysql headers !]))
and of course it complains because, as explained here:
If the header files are installed in a nonstandard location, such as
/opt/include, and CPPFLAGS doesn't refer to that directory (for
example, as -I/opt/include), the AC_CHECK_HEADER macro will fail, even
though the files do exist on the system. However, this is an issue for
the system's administrator. Part of the convenience of autoconf is
that you, as the developer, don't need to worry about these details.
So, as a developer, what's the proper way to solve this?
I also put the path of the real location in the Makefile with -I/usr/include/mysql, but it continues to complain.
EDIT: as suggested, I'm posting the configure.ac (the main parts):
useMysql=no
AC_MSG_CHECKING([whether to use mysql])
AC_ARG_ENABLE(mysql,
[ --enable-mysql Enable mysql support],
[MYSQL="$enableval"]
useMysql=yes,
[MYSQL="no"]
)
AC_MSG_RESULT([$MYSQL])
AC_SUBST([MYSQL])
[...]
if test "$MYSQL" = "yes"; then
AC_CHECKING([for MYSQL Library and Header files])
AC_CHECK_HEADER([mysql/mysql.h], ,AC_MSG_ERROR([Could not find mysql headers !]))
AC_CHECK_LIB(mysqlclient, mysql_init, [ MYSQL_LIBS="-lmysqlclient" ], [AC_MSG_ERROR([$PACKAGE_NAME requires but cannot find mysqlclient])])
AC_DEFINE(USE_MYSQL, 1, [Use MYSQL library])
AC_SUBST(MYSQL_LIBS)
fi
then I use the MYSQL_LIBS in the Makefile:
AM_CFLAGS = -g -fPIC -rdynamic -I$(top_srcdir)/include -I/usr/include/mysql
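One way to handle this on the developer side (a sketch, assuming the standard mysql_config tool is available on the build machine) is to ask mysql_config for the include and library flags and feed them into the checks instead of hard-coding -I/usr/include/mysql. Note that with mysql_config's -I path you would check for mysql.h rather than mysql/mysql.h:
if test "$MYSQL" = "yes"; then
    # Locate mysql_config; its path differs between distributions.
    AC_PATH_PROG([MYSQL_CONFIG], [mysql_config], [no])
    if test "$MYSQL_CONFIG" = "no"; then
        AC_MSG_ERROR([mysql_config not found; install the MySQL client development package])
    fi
    MYSQL_CFLAGS=`$MYSQL_CONFIG --cflags`
    MYSQL_LIBS=`$MYSQL_CONFIG --libs`
    # Run the header check with the reported flags, then restore CPPFLAGS.
    save_CPPFLAGS="$CPPFLAGS"
    CPPFLAGS="$CPPFLAGS $MYSQL_CFLAGS"
    AC_CHECK_HEADER([mysql.h], , AC_MSG_ERROR([Could not find mysql headers !]))
    CPPFLAGS="$save_CPPFLAGS"
    AC_DEFINE(USE_MYSQL, 1, [Use MYSQL library])
    AC_SUBST([MYSQL_CFLAGS])
    AC_SUBST([MYSQL_LIBS])
fi
The Makefile.am then picks up $(MYSQL_CFLAGS) and $(MYSQL_LIBS) instead of a fixed include path.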
I am creating an application in Ruby on Rails which has many engines (for modularity).
I want a different database for each engine. How do I configure this?
Database: MySQL
There is a good explanation at this link: http://www.blrice.net/blog/2016/04/09/one-rails-app-with-many-databases/
The general approach is to take a look at the framework sources and decide whether they can be reused.
Let's look at activerecord/lib/active_record/railties/databases.rake (v5.0.7) first, for example to see how db:create is implemented.
We will see ActiveRecord::Tasks::DatabaseTasks.create_current.
Let's open ActiveRecord::Tasks::DatabaseTasks and take a look at:
# The possible config values are:
#
# * +env+: current environment (like Rails.env).
# * +database_configuration+: configuration of your databases (as in +config/database.yml+).
# * +db_dir+: your +db+ directory.
# * +fixtures_path+: a path to fixtures directory.
# * +migrations_paths+: a list of paths to directories with migrations.
# * +seed_loader+: an object which will load seeds, it needs to respond to the +load_seed+ method.
# * +root+: a path to the root of the application.
#
# Example usage of DatabaseTasks outside Rails could look as such:
#
# include ActiveRecord::Tasks
# DatabaseTasks.database_configuration = YAML.load_file('my_database_config.yml')
# DatabaseTasks.db_dir = 'db'
# # other settings...
This way we arrive at the following solution:
namespace :your_engine do
  namespace :db do
    task :load_config do
      ActiveRecord::Tasks::DatabaseTasks.database_configuration = YAML.load_file("config/database_your_engine.yml")
      ActiveRecord::Tasks::DatabaseTasks.db_dir = "db_your_engine"
      ActiveRecord::Tasks::DatabaseTasks.migrations_paths = [ "components/your_engine/db/migrate" ]
      ActiveRecord::Base.configurations = ActiveRecord::Tasks::DatabaseTasks.database_configuration
      ActiveRecord::Migrator.migrations_paths = ActiveRecord::Tasks::DatabaseTasks.migrations_paths
      # You can observe following values to see how settings applied.
      # puts ActiveRecord::Base.configurations
      # puts ActiveRecord::Migrator.migrations_paths
      # puts ActiveRecord::Tasks::DatabaseTasks.database_configuration
      # puts ActiveRecord::Tasks::DatabaseTasks.migrations_paths
    end

    desc "Create Your DB"
    task create: :load_config do
      ActiveRecord::Tasks::DatabaseTasks.create_current
    end
  end
end
The same approach works for drop/migrate and any other needed tasks.
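With this in place the engine's database can be created with the usual rake workflow, e.g. (your_engine being the hypothetical namespace used above):
rake your_engine:db:create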
It is a good general rule: know your stack at least one level lower than the one you work with. Sometimes reading the underlying sources is much more helpful than a direct answer.
I will update this answer as I make progress with my solution...
We're using CentOS and would like to ban several Asian countries from accessing the entire server. Almost every IP we check which has tried to hack into our server is allocated to an Asian country (Russia, China, Pakistan, etc.)
We have an IP to country MySQL database we can efficiently query and would like to try something like:
-A INPUT -p tcp -m tcp --dport 80 -j /path/to/perlscript.pl
The script would need the IP passed in as an argument, then it would return either an ACCEPT or DROP target?
Thanks for the answers, here's my follow up.
Do you know if it is possible though? Having a rule point to a script which returns a target? (ACCEPT/DROP)
Not entirely sure how ipset works, will have to experiment I guess, but it looks like it creates a single rule. How would it handle Russia for example, which has over 6000 ranges assigned to it? And we want to add probably 20 - 40 countries in total, so we could end up needing to add in excess of 100,000 ranges. Wouldn't the overhead of a single MySQL query be less taxing?
SELECT country FROM ip_countries WHERE $VAR{ip} >= range1 && $VAR{ip} <= range2
The database we use is freely available here : http://software77.net/geo-ip/
It represents IPs in the database by converting the IP to a number using this formula :
$VAR{numberedIP} = $octs[3] + ($octs[2] * 256) + ($octs[1] * 256 * 256) + ($octs[0] * 256 * 256 * 256);
It will store the start of the range in the "range1" column, and the end of the range in the "range2" column.
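As a quick illustration of that conversion (a sketch; the table and column names follow the description above):
#!/usr/bin/env perl
use strict;
use warnings;

# Convert a dotted-quad IP into the integer form stored in the database.
sub ip_to_number {
    my @octs = split /\./, shift;
    return $octs[3] + ($octs[2] * 256) + ($octs[1] * 256 * 256) + ($octs[0] * 256 * 256 * 256);
}

my $num = ip_to_number('1.2.3.4');   # 16909060
# The lookup then reads:
#   SELECT country FROM ip_countries WHERE $num >= range1 && $num <= range2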
So you can see how we'd look up an IP using the above query. Literally takes less than a hundredth of a second to get a result and it's quite accurate. We have one website on a dedicated server, quite low traffic. But as with all servers I have ever checked, this one is hit daily by hackers' robots, checking email accounts, FTP accounts etc. And just about every web server I've ever worked on is compromised sooner or later. In our case, 99.99% of traffic from Asian countries has criminal intent attached to it.
We'd like this to run via iptables so that all ports are covered, not just HTTP for example by using directives in say .htaccess.
Do you think ipset would still be faster and more efficient?
It would be far too slow to launch perl for every matching packet. The right tool for this sort of thing is ipset, and there is much more information and documentation available on the ipset man page.
In CentOS you can install it with yum. Naturally, all of these commands and the script need to run as root:
# yum install ipset
Next install the kernel modules (you'll want this to happen at boot as well):
# modprobe -v ipset ip_set_hash_netport
And then use a script like the following to populate an ipset and block IPs from its ranges using iptables:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;
my $dbh = DBI->connect('... your DSN ...',...);
# I have no knowledge of your schema, but this assumes you can pull the
# address ranges in the form AA.BB.CC.DD/NN:
my $ranges = $dbh->selectcol_arrayref(
q{SELECT cidr FROM your_table WHERE country_code IN ('CN',...)});
`ipset create geoblock hash:netport`;
for (@$ranges) {
# to match on port 80:
`ipset add geoblock $_,80`;
}
`iptables -I INPUT -m set --set geoblock src -j DROP`;
If you would like to block all ports rather than just 80, use the ip_set_hash_net module instead of ip_set_hash_netport, change hash:netport to hash:net, and remove ,80 from the ipset command.
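A sketch of that all-ports variant (203.0.113.0/24 is just a placeholder range; note that on newer iptables the option is spelled --match-set rather than --set):
# modprobe -v ipset ip_set_hash_net
# ipset create geoblock hash:net
# ipset add geoblock 203.0.113.0/24
# iptables -I INPUT -m set --set geoblock src -j DROP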
Can I safely ignore these cmake compiler warnings?
I'm learning to compile packages from source and practicing on MySQL.
Should I be searching for and installing dev libraries when I see "notices" like this (referencing specific "not found" files):
$ cmake . -LA
...
-- Looking for include file cxxabi.h
-- Looking for include file cxxabi.h - not found.
-- Looking for include file dirent.h
-- Looking for include file dirent.h - found
-- Looking for include file dlfcn.h
-- Looking for include file dlfcn.h - found
And what should I do about notices referencing these "not found" messages:
-- Looking for bmove
-- Looking for bmove - not found
-- Looking for bsearch
-- Looking for bsearch - found
-- Looking for index
-- Looking for index - found
For example, cxxabi.h can be found in libstdc++6-4.7-dev on Debian. Do I need to install libstdc++6-4.7-dev to have a proper compile of MySQL?
I also have some (constant?) warnings that I'm unsure of:
-- Performing Test TIME_T_UNSIGNED
-- Performing Test TIME_T_UNSIGNED - Failed
-- Performing Test HAVE_GETADDRINFO
-- Performing Test HAVE_GETADDRINFO - Success
Overall, my build seems to work well, but I want to be sure.
If the CMake configuration process doesn't fail, it means these headers are optional and there are workarounds in the MySQL code for these cases.
It might also be that when some headers aren't present, some features are silently turned off. It makes sense to provide MySQL with as many optional headers as you can.
Beware that some headers are OS-specific, so you can't and don't have to provide them.
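For example, if you decide to pull in an optional header such as cxxabi.h, a rough workflow on Debian would be to install the dev package named in the question, clear CMake's cached results (CMake remembers "not found"), and configure again:
sudo apt-get install libstdc++6-4.7-dev
rm CMakeCache.txt
cmake . -LA | grep -i cxxabi    # should now report "found"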