Grant: this works
I have the following puppet code:
mysql_grant { 'my-user-name@1.2.3.4/my-database-name.*':
  ensure     => 'present',
  options    => ['GRANT'],
  privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
  table      => 'my-database-name.*',
  user       => 'my-user-name@1.2.3.4',
}
and that does grant the permissions I expect.
Revoke: this doesn't work
If I change my mind and say this:
mysql_grant { 'my-user-name@1.2.3.4/my-database-name.*':
  ensure     => 'absent',
  options    => ['GRANT'],
  privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
  table      => 'my-database-name.*',
  user       => 'my-user-name@1.2.3.4',
}
I note that it doesn't revoke permission (not even if I change s/GRANT/REVOKE/). Any pointers on how to automate revocation? I haven't been able to find it in the manual or by googling.
Repeat: I'm lost without copy and paste
Now suppose I want to permit access from several hosts. My puppet-fu fails me on how to avoid repeating the block (i.e., copy-pasting it with different IP addresses). I'm sure Puppet provides tools for this, but I've not figured out that part yet.
Thanks for any pointers!
For the repeat part I can think of two ways:
PuppetDB
Hiera
PuppetDB
Whenever you want a fact from one node to drive configuration on a second node, use PuppetDB. This mechanism is called exported resources, and it is also explained in the puppet-mysql documentation.
Example 1: Add the SSH host keys of all machines to the known_hosts of all other machines.
Example 2: Add all machines to monitoring, each creating its own host definition.
Example 3: On a certain class of machines, allow them to connect to MySQL.
In each case, you first install PuppetDB via the puppet-puppetdb module. You will need Puppet 4 for this. PuppetDB will only start if you have 8+ GB of memory.
You then have to write the resource export and the resource import. On all nodes that have a fact that you want (like ip / fqdn), you write the export:
@@mysql_grant { "my-user-name@${::ipaddress}/my-database-name.*":
  ensure     => 'present',
  options    => ['GRANT'],
  privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
  table      => 'my-database-name.*',
  user       => "my-user-name@${::ipaddress}",
}
The '@@' prefix creates the export. Note that the exported resource declaration is lower case. Also note the double quotes instead of single quotes wherever a variable is interpolated.
Whenever a node sees this, it will fill out the exported resource with its own fact (in this case ::ipaddress) and send it to PuppetDB. You can either add this part to every node you want to grant access to, which partially defeats the purpose, or you can have a manifest that is applied to all nodes and do something along the lines of:
if $::fqdn =~ /app/ {
  @@mysql_grant { "my-user-name@${::ipaddress}/my-database-name.*":
    ensure     => 'present',
    options    => ['GRANT'],
    privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
    table      => 'my-database-name.*',
    user       => "my-user-name@${::ipaddress}",
  }
}
Then you need to write an import statement on the node that should apply this.
Mysql_grant <<| |>>
Please note the upper case: resources are exported with a lower-case declaration and collected with a capitalized one.
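If several kinds of nodes export grants, the collector can also filter with a search expression instead of importing everything. A minimal sketch, assuming the exports carry an 'app' tag (set explicitly with tag => 'app', or inherited from the declaring class name):
Mysql_grant <<| tag == 'app' |>>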
Another quick example, which we apply to all our linux nodes:
# collect all the public host RSA keys for known hosts
@@sshkey { $hostname:
  ensure       => present,
  type         => 'rsa',
  host_aliases => [$::ipaddress, $::fqdn],
  key          => $sshrsakey,
}
# and populate known_hosts
Sshkey <<| |>>
# workaround for https://projects.puppetlabs.com/issues/21811
file { '/etc/ssh/ssh_known_hosts':
  ensure => present,
  path   => '/etc/ssh/ssh_known_hosts',
  mode   => '0644',
}
Hiera
Hiera is built for exactly this purpose: separating code from data. Please refer to the Hiera documentation for how to set it up.
What you end up doing is creating a YAML file that holds all your data:
mysql::grants:
  db1:
    username: my-user-name
    database: my-database-name
    ip: 1.2.3.4
    ensure: present
    options:
      - GRANT
    privileges:
      - SELECT
      - INSERT
      - DELETE
      - UPDATE
    table: my-database-name.*
  db2:
    username: my-user-name
    database: my-database-name
    ip: 1.2.3.5
    ensure: present
    options:
      - GRANT
    privileges:
      - SELECT
      - INSERT
      - DELETE
      - UPDATE
    table: my-database-name.*
Then you just go ahead and put this in your mysql node (although creating a small module would be cleaner):
$grants = hiera('mysql::grants', {})
create_resources('mysql::grant', $grants)
Puppet will look up the hash in Hiera and then create a grant for every entry found.
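Note that create_resources here targets mysql::grant, a small defined type, rather than the mysql_grant provider itself, because the hash keys above (username, database, ip) are not mysql_grant parameters. A minimal sketch of such a wrapper, assuming the key names from the YAML above:
define mysql::grant (
  String        $username,
  String        $database,
  String        $ip,
  String        $ensure     = 'present',
  Array[String] $options    = ['GRANT'],
  Array[String] $privileges = ['SELECT'],
  String        $table      = "${database}.*",
) {
  mysql_grant { "${username}@${ip}/${table}":
    ensure     => $ensure,
    options    => $options,
    privileges => $privileges,
    table      => $table,
    user       => "${username}@${ip}",
  }
}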
Try using mysql_grant on a new user, then using puppet apply with the -d (debug) and -v (verbose) options on your manifest.
This should give you a load of output that shows what it's doing: running SQL commands on your database such as
grant all on db.* to user
These will also be shown when you run
show grants for user
Then change to 'absent', and repeat.
Now you know exactly what SQL commands puppet is running on your DB.
Then you can try those commands directly in the DB to see if they do what you expect.
Note: using ensure => 'absent' is the correct thing to do to remove permissions; changing GRANT to REVOKE in the options won't help.
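For reference, for the example above the debug output should show statements along these lines (the exact SQL varies with the module version):
GRANT SELECT, INSERT, DELETE, UPDATE ON `my-database-name`.* TO 'my-user-name'@'1.2.3.4' WITH GRANT OPTION;
-- and on ensure => 'absent', the matching revoke:
REVOKE SELECT, INSERT, DELETE, UPDATE, GRANT OPTION ON `my-database-name`.* FROM 'my-user-name'@'1.2.3.4';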
Related
All my previous projects used DatabaseCleaner, so I'm used to starting with an empty DB and creating test data within each test with FactoryGirl.
Currently, I'm working on a project that has a test database with many records. It is an SQL file that all developers must import into their local test environments. The same DB is imported on the continuous integration server. I feel like having less control over the test data makes the testing process harder.
Some features allow their tests to focus on specific data, such as records that are associated with a certain user. In those cases, the preexisting data is irrelevant. Other features, such as a report that displays projects of all clients, do not allow me to "ignore" the preexisting data.
Is there any way to ignore the test DB contents in some tests (emulate an empty DB and create my own test data without actually deleting all rows in the test DB)? Maybe have two databases (both in the same MySQL server) and being able to switch between them (e.g., some tests use one DB, other tests use the other DB)?
Any other recommendations on how to deal with this scenario?
Thank you.
I would recommend preserving your test database and the 'test' environment as your 'clean' state. Then you could set up a separate database that you initially seed as your 'dirty' database. A before hook in your rails_helper file could also be set up with something like this:
RSpec.configure do |config|
  config.before :each, type: :feature do |example|
    if ENV['TEST_DIRTY'] || example.metadata[:test_dirty]
      ActiveRecord::Base.establish_connection(
        {
          :adapter  => 'mysql2',
          :database => 'test_dirty',
          :host     => '127.0.0.1',
          :username => 'root',
          :password => 'password'
        }
      )
    end
  end
end
Your database.yml file will need configurations added for your 'dirty' database. But I think the key here is keeping your clean and dirty states separate. Cheers!
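A minimal sketch of that extra database.yml entry, matching the placeholder credentials used in the hook above:
test_dirty:
  adapter: mysql2
  database: test_dirty
  host: 127.0.0.1
  username: root
  password: password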
I have found that adding the following configuration to spec/rails_helper.rb runs all DB operations in tests or before(:each) blocks as transactions, which are rolled back after each test finishes. That means we can do something like before(:each) { MyModel.delete_all }, create our own test data, and run our assertions (which will only see the data we created); after the test ends, all preexisting data is still in the DB because the deletion is rolled back.
RSpec.configure do |config|
  config.use_transactional_fixtures = true
end
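A minimal example of the pattern described above (the Project model and its attributes are hypothetical):
RSpec.describe 'a report over all projects' do
  before(:each) { Project.delete_all } # runs inside the transaction, so it is rolled back

  it 'sees only the rows created in this example' do
    Project.create!(name: 'only-project')
    expect(Project.count).to eq(1) # preexisting rows are invisible here
  end
end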
It appears that since Asterisk 1.8, MySQL CDR storage is built in (cdr_mysql.so is deprecated, as are the Asterisk Add-ons). I have cdr_mysql.conf configured (similar settings as in res_config_mysql.conf, which works), and I have MySQL running and the cdr table created (and yes, Asterisk can write to the tables). BUT I get no CDRs in that table (I do get the Master.csv CDRs). What am I missing?
Suggestions?
In Asterisk 11 cdr_mysql is still selectable via
make menuselect
It is deprecated, but since cdr_odbc works the same way, I don't see any issue with that.
You also need to have a cdr.conf file with
[general]
; Define whether or not to use CDR logging. Setting this to "no" will override
; any loading of backend CDR modules. Default is "yes".
enable=yes
And a cdr_custom.conf with something like this:
[mappings]
Master.csv => ${CSV_QUOTE(${CDR(clid)})},${CSV_QUOTE(${CDR(src)})},${CSV_QUOTE(${CDR(dst)})},${CSV_QUOTE(${CDR(dcontext)})},${CSV_QUOTE(${CDR(channel)})},${CSV_QUOTE(${CDR(dstchannel)})},${CSV_QUOTE(${CDR(lastapp)})},${CSV_QUOTE(${CDR(lastdata)})},${CSV_QUOTE(${CDR(start)})},${CSV_QUOTE(${CDR(answer)})},${CSV_QUOTE(${CDR(end)})},${CSV_QUOTE(${CDR(duration)})},${CSV_QUOTE(${CDR(billsec)})},${CSV_QUOTE(${CDR(disposition)})},${CSV_QUOTE(${CDR(amaflags)})},${CSV_QUOTE(${CDR(accountcode)})},${CSV_QUOTE(${CDR(uniqueid)})},${CSV_QUOTE(${CDR(userfield)})},${CDR(sequence)}
No ODBC! Just enable all the MySQL modules (even though they are deprecated) in make menuselect and run:
make clean && make && make install
make clean is necessary!
In modules.conf write the following:
load => app_db.so
load => app_cdr.so
load => app_mysql.so
load => cdr_csv.so
load => cdr_mysql.so
load => func_cdr.so
load => func_db.so
In cdr.conf
[general]
enable=yes
In cdr_mysql.conf, put everything needed to connect to MySQL.
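A minimal sketch of cdr_mysql.conf (host, credentials, and table name are placeholders; see the sample config shipped with Asterisk for the full option list):
[global]
hostname=localhost
dbname=asteriskcdrdb
table=cdr
user=asterisk
password=secret
port=3306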
After all this, go to the CLI, type cdr show status, and look for mysql among the registered backends. (Try the command before the rebuild too, so you can compare.)
CLI> cdr show status
Call Detail Record (CDR) settings
----------------------------------
Logging: Enabled
Mode: Simple
Log unanswered calls: Yes
Log congestion: No
* Registered Backends
-------------------
mysql
csv
While using Laravel, we have the option to seed our database or create tables at any time, like:
class UsersTableSeeder extends Seeder
{
    public function run()
    {
        User::truncate();

        User::create([
            'username' => 'Junaid',
            'email'    => 'darulehsan03@gmail.com',
            'password' => '1234'
        ]);

        User::create([
            'username' => 'Junaid Farooq',
            'email'    => 'aba@bcd.com',
            'password' => '4321'
        ]);
    }
}
We can seed our database at any time, but what if we have a large number of rows in our table that were not seeded but were added by users? How can we capture them the same way, in a seeder file, so that at any time and in any place we can load all those rows through our seeder file?
I am not asking about saving an .sql file and then importing or exporting it, but about a way to back the rows up in a seeder file.
I don't know of any method of backing up the data to a seed file. You can, like you've already said, export and import your data.
There are also a couple of packages available to backup and restore a database.
laravel-backup, which seems to be a Laravel-specific package that allows you to back up your database and restore it.
database-backup, which is framework-agnostic but does come with a Laravel service provider for easier integration with Laravel.
Both seem to allow you to back up to and restore from Amazon S3. Having used neither, I can't say which is better or why. You'll have to try both out and make that decision for yourself.
$dbh->selectrow_hashref("SHOW CREATE FUNCTION my_func");
returns
0 HASH(0x202fe70)
'Create Function' => undef
'Database Collation' => 'latin1_swedish_ci'
'Function' => 'my_func'
'character_set_client' => 'cp850'
'collation_connection' => 'cp850_general_ci'
'sql_mode' => ''
(the function definition is missing)
The same code works perfectly with SHOW CREATE VIEW, and SHOW CREATE FUNCTION works on the MySQL command line with the same credentials.
I wondered if the data type was too large for the attribute, so I tried setting LongReadLen to a very large number on connect, but it made no difference.
I forgot that user@localhost is not the same as user@remote_host (MySQL bites me on this every time :-(
What's bizarre is that user@remote_host HAS the required privilege, while user@localhost does not (I'm in a shared hosting environment and don't have access to privileges on the mysql database).
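For reference, SHOW CREATE FUNCTION only returns the function body if the account is the routine's definer or has the relevant privilege. On MySQL 5.x that typically means SELECT on the mysql.proc table, which only an administrator could grant in this situation; a sketch ('my-user' is a placeholder):
GRANT SELECT ON mysql.proc TO 'my-user'@'localhost';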
I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like CREATE and INSERT commands to be created at the same time - basically, the same performance as mysqldump. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute via an HTTP request on an httpd on the host, this will work:
You should be able to run the following pure-PHP script straight from a secure directory under /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with just a little difficulty. You can create a secure web page or a straight text-dump link, known only to you and sufficiently secured to prevent all unauthorized access. The script that builds the page/text contents could be written to follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS X;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download.
The final web page/text download would have an output of all create statements with "drop table if exists" safeguards, and insert statements. Save the output to your own machine as a ".sql" file, and execute on any backup host as needed.
I'm sorry you have to go through with this. Note that preserving mysql user accounts that you need is something else entirely.
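If it helps, here is a rough PHP sketch of those three steps; the connection details are placeholders, and a real script needs more care with NULLs, binary columns, and character sets:
<?php
// Rough sketch of the SHOW TABLES / SHOW CREATE TABLE / SELECT * approach.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'database');

foreach ($db->query('SHOW TABLES')->fetch_all() as $row) {
    $table = $row[0];

    // Step 2: drop guard plus the create statement
    echo "DROP TABLE IF EXISTS `$table`;\n";
    $create = $db->query("SHOW CREATE TABLE `$table`")->fetch_row();
    echo $create[1] . ";\n";

    // Step 3: transform every row into an INSERT statement
    $result = $db->query("SELECT * FROM `$table`");
    while ($r = $result->fetch_row()) {
        $vals = array_map(function ($v) use ($db) {
            return $v === null ? 'NULL' : "'" . $db->real_escape_string($v) . "'";
        }, $r);
        echo "INSERT INTO `$table` VALUES (" . implode(', ', $vals) . ");\n";
    }
}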
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer this as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily; and it works with older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question says 'using query', but I believe the point here is that any means necessary is sought when shell access is not available -- that is how I landed on this page, and phpMyAdmin saved me!