Script to Run through each Bugzilla Bug and Select 'Save Changes' - mysql

I need to run through each Bugzilla bug individually and click the Save Changes button due to a previous problem. As there are over 8000 bugs, it would not be feasible to do this manually. I think the Perl module WWW::Bugzilla will let me do this by connecting to the Bugzilla API.
use strict;
use warnings;
use WWW::Bugzilla;
my $bz = WWW::Bugzilla->new( server     => 'http://be-qa-01/',
                             email      => 'test.email@test.com',
                             password   => 'pass',
                             bug_number => 8333 );
# show me the chosen component
my $component = $bz->component;
I'm not sure how to go about this and would be grateful for any help.
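A minimal sketch of the full loop, assuming WWW::Bugzilla's commit() has the same effect as clicking 'Save Changes' in the web UI (the server URL, credentials, and bug-number range are placeholders):
use strict;
use warnings;
use WWW::Bugzilla;

# Re-save every bug by loading it and committing it unchanged.
for my $bug_number ( 1 .. 8333 ) {
    my $bz = WWW::Bugzilla->new(
        server     => 'http://be-qa-01/',
        email      => 'test.email@test.com',
        password   => 'pass',
        bug_number => $bug_number,
    );
    $bz->commit;    # assumed equivalent of clicking 'Save Changes'
    print "Saved bug $bug_number\n";
}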


Laravel seed database from existing database

I now have a new structure for my database, but I need to import the old data into the new format. For that reason I want to use the Laravel seeder, but I somehow need to connect to the old database, make select queries, and tell the seeder how to put the data into the new database.
Is that possible?
Try the inverse seed generator, iseed.
Examples:
php artisan iseed my_table
php artisan iseed my_table,another_table
Visit: https://github.com/orangehill/iseed
Configure your Laravel app to use two MySQL connections (see How to use multiple databases in Laravel), one for the new database and the other for the old one.
I'll name them old and new.
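A minimal sketch of the two connection entries in config/database.php, assuming a Laravel 5-style config and that both databases are MySQL (the env keys and defaults are placeholders):
'connections' => [
    'old' => [
        'driver'   => 'mysql',
        'host'     => env('OLD_DB_HOST', '127.0.0.1'),
        'database' => env('OLD_DB_DATABASE', 'legacy_db'),
        'username' => env('OLD_DB_USERNAME', 'root'),
        'password' => env('OLD_DB_PASSWORD', ''),
    ],
    'new' => [
        'driver'   => 'mysql',
        'host'     => env('DB_HOST', '127.0.0.1'),
        'database' => env('DB_DATABASE', 'new_db'),
        'username' => env('DB_USERNAME', 'root'),
        'password' => env('DB_PASSWORD', ''),
    ],
],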
In your seeds, read from the old database and write into the new one.
$old_users = DB::connection('old')->table('users')->get();
foreach ($old_users as $user) {
    DB::connection('new')->table('users')->insert([
        'name'     => $user->name,
        'email'    => $user->email,
        'password' => $user->password,
        'old_id'   => $user->id,
        // ...
    ]);
}
Make sure to print messages while seeding, like $this->command->info('Users table seeded');, or even a progress bar (you can access command-line methods) so you know how far along the import is; see the sketch below.
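A sketch of progress reporting inside a seeder's run() method, assuming the same 'old' and 'new' connections as above ($this->command and its output helpers are available in seeders run via artisan db:seed):
$old_users = DB::connection('old')->table('users')->get();
$bar = $this->command->getOutput()->createProgressBar(count($old_users));
foreach ($old_users as $user) {
    // ... insert into the 'new' connection as shown above ...
    $bar->advance();
}
$bar->finish();
$this->command->info(' Users table seeded');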
Download the package from the Git repo: https://github.com/orangehill/iseed
Then update the file src/Orangehill/Iseed/IseedCommand.php, adding the code below at line 75:
// update package script
if ($this->argument('tables') === null) {
    $tables = Schema::getConnection()->getDoctrineSchemaManager()->listTableNames();
}
and update the getArguments method in the same file with the code below:
array('tables', InputArgument::OPTIONAL, 'comma separated string of table names'),
Then run php artisan iseed with no arguments; it will read all the tables from your existing DB and create seeders for all of them.

Puppet and mysql: revoke and repeat

Grant: this works
I have the following puppet code:
mysql_grant { 'my-user-name@1.2.3.4/my-database-name.*':
  ensure     => 'present',
  options    => ['GRANT'],
  privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
  table      => 'my-database-name.*',
  user       => 'my-user-name@1.2.3.4',
}
and that does grant the permissions I expect.
Revoke: this doesn't work
If I change my mind and say this:
mysql_grant { 'my-user-name@1.2.3.4/my-database-name.*':
  ensure     => 'absent',
  options    => ['GRANT'],
  privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
  table      => 'my-database-name.*',
  user       => 'my-user-name@1.2.3.4',
}
I note that it doesn't revoke the permissions (not even if I change s/GRANT/REVOKE/). Any pointers on how to automate revocation? I haven't been able to find it in the manual or by googling.
Repeat: I'm lost without copy and paste
Now suppose I want to permit access from several hosts. My puppet-fu fails me on how to avoid repeating the block (i.e., just copy-pasting with different IP addresses). I'm sure Puppet provides tools for this, but I've not figured that part out yet.
Thanks for any pointers!
For the repeat part I can think of two ways:
puppetDB
hiera
PuppetDB
Whenever you want a fact from one node to do something on a second node, use PuppetDB. This is called exported resources, and it is also explained in the puppet-mysql documentation.
Example 1: Add the SSH host keys of all machines to the known_hosts of all other machines.
Example 2: Add all machines to monitoring, each creating its own host definition.
Example 3: Allow a certain class of machines to connect to MySQL.
In each case, you first install PuppetDB via the puppet-puppetdb module. You will need Puppet 4 for this, and note that PuppetDB will only start if you have 8+ GB of memory.
You then have to write the resource export and the resource import. On all nodes that have a fact you want (like the IP or FQDN), you write the export:
@@mysql_grant { "my-user-name@${::ipaddress}/my-database-name.*":
  ensure     => 'present',
  options    => ['GRANT'],
  privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
  table      => 'my-database-name.*',
  user       => "my-user-name@${::ipaddress}",
}
The '@@' creates the export. Note that the exported resource type is lower case. Also note the double quotes instead of single quotes whenever a variable is used.
Whenever a node sees this, it will fill out the exported resource with its fact (in this case ::ipaddress) and send it to PuppetDB. You can either add this part to all nodes you want to grant access to, partially defeating its purpose, or have a manifest that is applied to all nodes and do something along the lines of:
if $::fqdn =~ /app/ {
  @@mysql_grant { "my-user-name@${::ipaddress}/my-database-name.*":
    ensure     => 'present',
    options    => ['GRANT'],
    privileges => ['SELECT', 'INSERT', 'DELETE', 'UPDATE'],
    table      => 'my-database-name.*',
    user       => "my-user-name@${::ipaddress}",
  }
}
Then you need to write an import statement on the node that should apply this.
Mysql_grant <<| |>>
Please note the upper case.
Another quick example, which we apply to all our linux nodes:
# collect all the public host RSA keys for known hosts
@@sshkey { $hostname:
  ensure       => present,
  type         => 'rsa',
  host_aliases => [$::ipaddress, $::fqdn],
  key          => $sshrsakey,
}
# and populate known_hosts
Sshkey <<| |>>
# https://projects.puppetlabs.com/issues/21811
file { '/etc/ssh/ssh_known_hosts':
  ensure => present,
  path   => '/etc/ssh/ssh_known_hosts',
  mode   => '0644',
}
Hiera
Hiera is built for exactly this purpose: to separate code from data. Please refer to the Hiera documentation for how to set it up.
What you end up doing is creating a YAML file that has all your data in it:
mysql::grants:
  db1:
    username: my-user-name
    database: my-database-name
    ip: 1.2.3.4
    ensure: present
    options:
      - GRANT
    privileges:
      - SELECT
      - INSERT
      - DELETE
      - UPDATE
    table: my-database-name.*
  db2:
    username: my-user-name
    database: my-database-name
    ip: 1.2.3.5
    ensure: present
    options:
      - GRANT
    privileges:
      - SELECT
      - INSERT
      - DELETE
      - UPDATE
    table: my-database-name.*
Then you just go ahead and put this in your MySQL node's manifest (although creating a small module would be cleaner):
$grants = hiera('mysql::grants', undef)
create_resources('mysql::grant', $grants)
Puppet will parse all of Hiera and then create a grant for every DB found.
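Note that create_resources needs mysql::grant to exist as a defined type accepting the keys used in the YAML above; it is not the built-in mysql_grant resource. A hypothetical minimal wrapper (the name and parameters are assumptions chosen to match the data) could look like:
define mysql::grant (
  $username,
  $database,   # kept because it appears in the Hiera data
  $ip,
  $ensure     = 'present',
  $options    = [],
  $privileges = [],
  $table      = undef,
) {
  # map the Hiera keys onto the real mysql_grant resource
  mysql_grant { "${username}@${ip}/${table}":
    ensure     => $ensure,
    options    => $options,
    privileges => $privileges,
    table      => $table,
    user       => "${username}@${ip}",
  }
}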
Try using mysql_grant on a new user, then run puppet apply with the -d (debug) and -v (verbose) options on your manifest.
This should give you a load of output that shows what it's doing. What it will be doing is running sql commands on your database such as
grant all on db.* to user
These will also be shown when you run
show grants for user
Then change to 'absent', and repeat.
Now you know exactly what SQL commands puppet is running on your DB.
Then you can try those commands directly in the DB to see if they do what you expect.
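For example, for the grant above you would expect statements along these lines (a sketch; the exact quoting Puppet emits may differ):
GRANT SELECT, INSERT, DELETE, UPDATE ON `my-database-name`.* TO 'my-user-name'@'1.2.3.4' WITH GRANT OPTION;
SHOW GRANTS FOR 'my-user-name'@'1.2.3.4';
REVOKE SELECT, INSERT, DELETE, UPDATE ON `my-database-name`.* FROM 'my-user-name'@'1.2.3.4';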
Note: using ensure => 'absent' is the correct way to remove permissions; changing grant to revoke won't help.

MySQL CDRs in Asterisk 11?

It appears that since Asterisk 1.8, MySQL CDR storage is built in (cdr_mysql.so is deprecated, as are the Asterisk add-ons). I have cdr_mysql.conf configured (similar settings as in res_config_mysql.conf, which works), MySQL is running, and the cdr table is created (and yes, Asterisk can write to the tables). BUT I get no CDRs in that table (I do get the Master.csv CDRs). What am I missing?
Suggestions?
In Asterisk 11, cdr_mysql is still selectable via
make menuselect
It is deprecated, but since cdr_odbc works the same way, I don't see any issue with that.
You also need to have a cdr.conf file with
[general]
; Define whether or not to use CDR logging. Setting this to "no" will override
; any loading of backend CDR modules. Default is "yes".
enable=yes
And cdr_custom.conf with something like this
[mappings]
Master.csv => ${CSV_QUOTE(${CDR(clid)})},${CSV_QUOTE(${CDR(src)})},${CSV_QUOTE(${CDR(dst)})},${CSV_QUOTE(${CDR(dcontext)})},${CSV_QUOTE(${CDR(channel)})},${CSV_QUOTE(${CDR(dstchannel)})},${CSV_QUOTE(${CDR(lastapp)})},${CSV_QUOTE(${CDR(lastdata)})},${CSV_QUOTE(${CDR(start)})},${CSV_QUOTE(${CDR(answer)})},${CSV_QUOTE(${CDR(end)})},${CSV_QUOTE(${CDR(duration)})},${CSV_QUOTE(${CDR(billsec)})},${CSV_QUOTE(${CDR(disposition)})},${CSV_QUOTE(${CDR(amaflags)})},${CSV_QUOTE(${CDR(accountcode)})},${CSV_QUOTE(${CDR(uniqueid)})},${CSV_QUOTE(${CDR(userfield)})},${CDR(sequence)}
No ODBC! Just enable everything MySQL (even if it is deprecated) in make menuselect and run:
make clean && make && make install
make clean is necessary!
In modules.conf write the following:
load => app_db.so
load => app_cdr.so
load => app_mysql.so
load => cdr_csv.so
load => cdr_mysql.so
load => func_cdr.so
load => func_db.so
In cdr.conf
[general]
enable=yes
In cdr_mysql.conf, put everything needed to connect to MySQL; see the sketch below.
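A minimal sketch of cdr_mysql.conf, assuming the default table name and port (hostname, user, and password are placeholders to replace with your own):
[global]
hostname=localhost
dbname=asteriskcdrdb
table=cdr
user=asteriskuser
password=secret
port=3306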
After all this, go to the CLI and type cdr show status, and look for mysql among the registered backends. (It is worth running this command beforehand too, so you can compare.)
CLI> cdr show status
Call Detail Record (CDR) settings
----------------------------------
Logging: Enabled
Mode: Simple
Log unanswered calls: Yes
Log congestion: No
* Registered Backends
-------------------
mysql
csv

Backup database(s) using query without using mysqldump

I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like CREATE and INSERT commands to be created at the same time - basically, the same output that mysqldump produces. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute via an HTTP request on the host's httpd, this will work.
So you should be able to run the following pure-PHP script straight from a secure directory in /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using whatever scripting option your host provides, you can achieve this with only a little difficulty. You can create a secure web page or plain-text dump link known only to you and sufficiently secured to prevent all unauthorized access. The script that builds the page/text contents could be written to follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may want to prepend "DROP TABLE IF EXISTS X;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download.
The final web page/text download would contain all the create statements with "DROP TABLE IF EXISTS" safeguards, followed by the insert statements; a sketch of this in PHP follows below. Save the output to your own machine as a ".sql" file, and execute it on any backup host as needed.
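Here is a minimal sketch of those three steps using PDO (the connection details are placeholders, quoting of identifiers and binary data is simplified, and you would still want to protect this page):
<?php
// Hypothetical credentials; replace with your host's values.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
header('Content-Type: text/plain');

// Step 1: list all tables.
$tables = $pdo->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN);

foreach ($tables as $table) {
    // Step 2: drop-guard plus the CREATE TABLE statement.
    $create = $pdo->query("SHOW CREATE TABLE `$table`")->fetch(PDO::FETCH_NUM);
    echo "DROP TABLE IF EXISTS `$table`;\n";
    echo $create[1] . ";\n\n";

    // Step 3: one INSERT per row, with values quoted by PDO.
    $rows = $pdo->query("SELECT * FROM `$table`")->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        $values = array();
        foreach ($row as $value) {
            $values[] = ($value === null) ? 'NULL' : $pdo->quote($value);
        }
        echo "INSERT INTO `$table` VALUES (" . implode(', ', $values) . ");\n";
    }
    echo "\n";
}
?>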
I'm sorry you have to go through this. Note that preserving the MySQL user accounts you need is something else entirely.
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer this as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily, and it works for older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question says 'using query', but I believe the point here is that any means necessary is sought when shell access is not available; that is how I landed on this page, and phpMyAdmin saved me!

Having trouble running cakephp app on remote server

If you get:
Error: SQLSTATE[42000]: Syntax error or access violation: 1104 The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay
after uploading a CakePHP app and its database from XAMPP on localhost to a remote server, here is what worked for me.
I tried importing the cake database into a new DB on my local machine and it worked fine, so I couldn't see the imported data being the problem.
I had no idea how to fix this, but it turns out to be a simple and common problem with an easy fix, as below.
After much hair-pulling I managed to find the problem/fix with the help of my good friend ten1 on the CakePHP IRC chat.
When this is a CakePHP-specific issue, which it was in my case, you need to do the unthinkable and edit the core.
The file you need to edit is AclNode.php, located at /lib/Cake/Model/AclNode.php.
You need to add a line before line 113
112 }
$db->query('SET SQL_BIG_SELECTS=1'); //Add this line
113 $result = $db->read($this, $queryData, -1);
114 $path = array_values($path);
This is generally only a problem on servers with shared hosting.
Rather than editing a core file, you could add a beforeFind method to your app/models/app_model.php file (if you want it to affect all models) or to your particular model file, like the following:
public function beforeFind($queryData) {
    $this->query('SET SQL_BIG_SELECTS=1');
    return $queryData;
}
For CakePHP 3 the following works:
'Datasources' => [
    'default' => [
        'init' => [
            PDO::MYSQL_ATTR_INIT_COMMAND => 'SET SESSION SQL_BIG_SELECTS=1',
        ], // Add this 'init' key to the existing 'default' array
    ],
],