Store MySQL query information in Chef

I am trying to query my MySQL database. I am using the database cookbook and can set up a connection with my database. I am trying to query the database for information, so the question is: how do I store that information so I can access it in another resource? Where do the results of the query get stored? This is my recipe:
mysql_database "Get admin users" do
  connection mysql_connection_info
  sql "Select * from #{table_name}"
  action :query
end
Thanks in advance

If you don't have experience with Ruby, this might be really confusing. There's no way to "return" the result of a provider from a Chef resource. The mysql_database is a Chef::Recipe DSL method that gets translated to Chef::Provider::Database::Mysql at runtime. This provider is defined in the cookbook.
If you take some time to dive into that provider, you can see how it executes queries using the db object. In order to get the results of a query, you'll need to create your own connection object in the recipe and execute a command against it. For example:
require 'mysql'
db = ::Mysql.new('host', 'username', 'password', nil, 'port', 'socket') # varies with setup
users = db.query('SELECT * FROM users')
#
# You might need to manipulate the result into a more manageable data
# structure by splitting on a carriage return, etc...
#
# Assume the new object is an Array where each entry is a username.
#
file '/etc/group' do
  content users.join("\n")
end
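For comparison, here is a minimal sketch of the same idea using the mysql2 gem rather than the older mysql gem. The host, credentials and the users table are placeholder assumptions, and as with the example above the query runs when the recipe is compiled.
chef_gem 'mysql2' do
  compile_time true
end

require 'mysql2'

# Placeholder connection details; adjust to your setup.
client = Mysql2::Client.new(
  host:     'localhost',
  username: 'admin',
  password: 'secret',
  database: 'mydb'
)

# mysql2 returns each row as a Hash keyed by column name.
admin_users = client.query("SELECT username FROM users WHERE admin = 1")
                    .map { |row| row['username'] }

file '/tmp/admin_users' do
  content admin_users.join("\n")
end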

I find good old Chef::Mixin::ShellOut / shell_out() fairly sufficient for this job, and it's DB agnostic (assuming you know your SQL :) ). It works particularly well if all you are querying is one value; for multiple rows/columns you will need to parse the SQL query results. You need to hide row counts and column headers, eat preceding white-space, etc. from your result set to get just the query results you want. For example, the snippets below work on SQL Server:
Single item
so = shell_out!("sqlcmd ... -Q \"set nocount on; select file_name(1)\" -h-1 -W")
db_logical_name = so.stdout.chop
Multiple rows/columns (the 0-based position of a value within a row tells you which column it is)
so = shell_out!("sqlcmd ... -Q \"set nocount on; select * from my_table\" -h-1 -W")
rows_column_data = so.stdout.chop
# columns within rows are space separated, so can be easily parsed
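If you need the multi-row result as a real data structure, a minimal parsing sketch could look like this (assuming the same sqlcmd invocation as above and that no value contains embedded spaces; otherwise pass an explicit column separator with -s and split on that instead):
so = shell_out!("sqlcmd ... -Q \"set nocount on; select * from my_table\" -h-1 -W")

rows = so.stdout
         .lines
         .map(&:strip)
         .reject(&:empty?)                 # drop any trailing blank lines
         .map { |line| line.split(/\s+/) } # columns are space separated

# rows is now an Array of Arrays, e.g. rows[0][1] is the second column of the first row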

Related

How to fix mysql uppercase query in php and mysql

I am currently working on a website that uses the ADOdb library. Throughout the entire website, all the queries are written in UPPERCASE.
The problem is that when I run the query it doesn't work because of the table name, which is in UPPERCASE. But when I change the table name to lowercase, it works.
$sql = "SELECT * FROM MEMBERS where USERNAME = '$username'";
$db = ADONewConnection('mysql');
$db->debug = true;
$db->Connect(DB_HOSTNAME, DB_USERNAME, DB_PASSWORD, DB_NAME);
$resultFriends = $db->Execute($sql);
while ($row = $resultFriends->FetchRow()) {
    var_dump($row);
    die;
}
Here is the error I get:
ADOConnection._Execute(SELECT * FROM MEMBERS where USERNAME = 'fury', false) % line 1012, file: adodb.inc.php
ADOConnection.Execute(SELECT * FROM MEMBERS where USERNAME = 'fury') % line 15, file: index.php
Bear in mind I don't want to change the scripts. There are 1000 files and 10000 places.
Is there any library, or is there any way, that I can run these queries without error?
The live site was running on a different Linux setup, but the new dev site is Ubuntu. I tried this on the Ubuntu MySQL command line and it didn't work.
The solution was that I had to reconfigure the MySQL database in AWS RDS:
You have to modify the “lower_case_table_names” parameter for your DB Instance(s). Prior to today, the lower_case_table_names parameter was not modifiable, with a system default of zero (0) or “table names stored as specified and comparisons are case sensitive.” Beginning immediately, values of zero and one (table names are stored in lowercase and comparisons are not case sensitive) are allowed. See the MySQL documentation for more information on the lower_case_table_names parameter.
The lower_case_table_names parameter can be specified via the rds-modify-db-parameter-group API. Simply include the parameter name and specify the desired value, such as in the following example:
rds-modify-db-parameter-group example --parameters "name=lower_case_table_names, value=1, method=pending-reboot" --region us-east-1
Support for modifying parameters via the AWS Management Console is expected to be added later this year.
It is recommended to set the lower_case_table_names parameter via a custom DB Parameter Group, and to do so before creating the associated DB Instance, because changing the parameter for existing DB Instances could cause inconsistencies with point-in-time recovery backups and with Read Replicas.
Amazon RDS

How to update a row in a MySQL database using Ruby's Sequel toolkit?

This should be the simplest thing but for some reason it's eluding me completely.
I have a Sequel connection to a database named DB. It's using the Mysql2 engine if that's important.
I'm trying to update a single record in a table in the database. The short loop I'm using looks like this:
dataset = DB["SELECT post_id, message FROM xf_post WHERE message LIKE '%#{match}%'"]
dataset.each do |row|
  new_message = process_message(row[:message])
  # HERE IS WHERE I WANT TO UPDATE THE ROW IN THE DATABASE!
end
I've tried:
dataset.where('post_id = ?', row[:post_id]).update(message: new_message)
Which is what the Sequel cheat sheet recommends.
And:
DB["UPDATE xf_post SET message = ? WHERE post_id = ?", new_message, row[:post_id]]
Which should be raw SQL executed by the Sequel connector. Neither throws an error nor outputs any error message (I'm using a logger with the Sequel connection), but both calls fail to update the records in the database. The data is unchanged when I query the database after running the code.
How can I make the update call function properly here?
Your problem is you are using a raw SQL dataset, so the where call isn't going to change the SQL, and update is just going to execute the raw SQL. Here's what you want to do:
dataset = DB[:xf_post].select(:post_id, :message).
  where(Sequel.like(:message, "%#{match}%"))
That will make the where/update combination work.
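Putting that dataset together with the loop from the question, a minimal sketch (process_message and match are the question's own names) could look like this:
dataset = DB[:xf_post].select(:post_id, :message).
  where(Sequel.like(:message, "%#{match}%"))

dataset.each do |row|
  new_message = process_message(row[:message])
  # With a real dataset (not raw SQL), the where/update combination issues an
  # UPDATE scoped to this post_id.
  dataset.where('post_id = ?', row[:post_id]).update(message: new_message)
end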
Note that your original code has a trivial SQL injection vulnerability if match depends on user input, which this new code avoids. You may want to consider using Dataset#escape_like if you want to escape metacharacters inside match, otherwise if match depends on user input, it's possible for users to use very complex matching syntax that the database may execute slowly or not handle properly.
Note that the reason that
DB["UPDATE xf_post SET message = ? WHERE post_id = ?", new_message, row[:post_id]]
doesn't work is that it only creates a dataset; it doesn't execute it. You can actually call update on that dataset to run the query and return the number of affected rows.
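That is, something like the following should execute the raw statement (new_message and row come from the question's loop):
affected = DB["UPDATE xf_post SET message = ? WHERE post_id = ?",
              new_message, row[:post_id]].update
# `affected` is the number of rows the UPDATE changed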

Ruby MySQL output conflicting on different servers

I have coded a Ruby IRC bot, which is on GitHub (/ninjex/rubot), and it is producing some conflicting output with MySQL on a dedicated server I just purchased.
Firstly, we have the connection to the database in the MySQL folder (in .gitignore), which looks similar to the following code block.
@con = Mysql.new('localhost', 'root', 'pword', 'db_name')
Then we have an actual function to query the database
def db_query
  que = get_message           # Grabs the query from the user, e.g. ./db_query SELECT * FROM words
  results = @con.query(que)   # Sends the query through the connection, e.g. @con.query("SELECT * FROM words")
  results.each { |x| chan_send(x) } # For each row returned, send it to the channel via chan_send
end
On my local machine, when running the command:
./db_query SELECT amount, user from words WHERE user = 'Bob' and word = 'hello'
I receive the output in IRC in an Array-like fashion: ["17", "Bob"], where 17 is amount and Bob is the user.
However, using this same function on my dedicated server results in output like 17Bob. I have attempted many changes in the code, and have tried to parse the data into its own variable, but it seems that 17Bob is coming out as a single value, making it impossible to parse into something like an array, which I could then use to send the data correctly.
This seems odd to me on both my local machine and the dedicated server, as I was expecting the output to first send 17 to the IRC and then Bob like:
17
Bob
For all the functions and source you can check my GitHub (/Ninjex/rubot); however, you may need to install some gems.
A few notes:
Make sure you are sanitizing the query you get via get_message, or you are opening yourself up to some serious security problems (SQL injection).
Ensure you are using the same versions of the mysql gem, Ruby and MySQL on both machines. Differences in any of these may alter the expected output.
If you are at your wits' end and are unable to resolve the underlying issue, you can always send a custom delimiter and use it to split. Unfortunately, it will muck up the case that is actually working and will need to be stripped out.
Here's how I would approach debugging the issue on the dedicated machine:
def db_query
  que = get_sanitized_message
  results = @con.query(que)
  require 'pry'
  binding.pry
  results.each { |x| chan_send(x) }
end
Add the pry gem to your Gemfile, or gem install pry.
Update your code to use pry: see above
This will open up a pry console when the binding.pry line is hit and you can interrogate almost everything in your running application.
I would take a look at results and see if it's an array. Just type results in the console and it will print out the value. Also type out results.class. It's possible that query is returning some special result set object that is not an array, but that has a method to access the result array.
If results is an array, then the issue is most likely in chan_send. Perhaps it needs to be using something like puts vs print to ensure there's a new line after each message. Is it possible that you have different versions of your codebase deployed? I would also add a sleep 1 within the each block to ensure that this is not related to your handling of messages arriving at the same time.
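One way to make the output independent of how chan_send renders a row is to convert each row explicitly before sending it. A minimal sketch, assuming chan_send takes a single string and building on the db_query shown above:
def db_query
  que = get_sanitized_message
  results = @con.query(que)
  results.each do |row|
    # With the mysql gem each row is normally an Array of column values;
    # converting and joining explicitly makes both servers emit the same string.
    fields = row.respond_to?(:to_a) ? row.to_a : [row]
    chan_send(fields.map(&:to_s).join(' | '))  # e.g. "17 | Bob"
  end
end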

Backup database(s) using query without using mysqldump

I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like CREATE and INSERT commands to be created at the same time - basically, the same performance as mysqldump. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but so long as you can execute via an HTTP request on an httpd on the host, this will work:
So you should be able to just run the following pure-PHP script straight from a secure directory in /www/, have the output file written there, and grab it with a wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with just a little difficulty. You can create a secure web page or a straight text dump link known only to you and sufficiently secured to prevent all unauthorized access. The script to build the page/text contents could be written to follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS `X`;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download. (A rough sketch of these three steps is shown at the end of this answer.)
The final web page/text download would have an output of all create statements with "DROP TABLE IF EXISTS" safeguards, and insert statements. Save the output to your own machine as a ".sql" file, and execute on any backup host as needed.
I'm sorry you have to go through with this. Note that preserving the MySQL user accounts that you need is something else entirely.
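If the scripting option your host provides happens to be Ruby rather than PHP, the three steps above might look roughly like the sketch below. This is only an illustration: the mysql2 gem, the placeholder credentials and the simple value quoting are all assumptions, and exotic column types or views may need extra handling.
require 'mysql2'

# Placeholder connection details; adjust to your host.
client = Mysql2::Client.new(host: 'localhost', username: 'user',
                            password: 'pass', database: 'mydb')

dump = ''

# Step 1: list the tables.
tables = client.query('SHOW TABLES').map { |row| row.values.first }

tables.each do |table|
  # Step 2: CREATE statement, preceded by a DROP TABLE IF EXISTS safeguard.
  create = client.query("SHOW CREATE TABLE `#{table}`").first['Create Table']
  dump << "DROP TABLE IF EXISTS `#{table}`;\n#{create};\n\n"

  # Step 3: turn every row into an INSERT statement.
  client.query("SELECT * FROM `#{table}`").each do |row|
    values = row.values.map { |v| v.nil? ? 'NULL' : "'#{client.escape(v.to_s)}'" }
    dump << "INSERT INTO `#{table}` VALUES (#{values.join(', ')});\n"
  end
  dump << "\n"
end

# Write the result where your secure page can serve it for download.
File.write('dump.sql', dump)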
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer this to you as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily -- and it works for older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question states 'using query', but I believe the point here is that any means necessary is sought when shell access is not available -- that is how I landed on this page, and phpMyAdmin saved me!

How can I get the database name from a Perl MySQL DBI handle?

I've connected to a MySQL database using Perl DBI. I would like to find out which database I'm connected to.
I don't think I can use:
$dbh->{Name}
because I call USE new_database and $dbh->{Name} only reports the database that I initially connected to.
Is there any trick or do I need to keep track of the database name?
Try just executing the query
select DATABASE();
From what I could find, the DBH has access to the DSN that you initially connected with, but not after you made the change. (There's probably a better way to switch databases.)
$dbh->{Name} returns the db name from your db handle.
If you connected to another db after connecting with your dbh, using the MySQL query "USE db_name", and you did not set up a new Perl DBI db handle, then of course $dbh->{Name} will return the database you first connected to... It's not spontaneous generation.
So to get the connected db name once the db handle is set up - for DBI mysql:
sub get_dbname {
    my ($dbh) = @_;
    my $connected_db = $dbh->{Name};
    $connected_db =~ s/^dbname=([^;].*);host.*$/$1/;
    return $connected_db;
}
You can ask mysql:
($dbname) = (each %{$dbh->selectrow_hashref("show tables")}) =~ /^Tables_in_(.*)/;
Update: obviously select DATABASE() is a better way to do it :)
When you create a connection object, it is for a certain database, in DBI's case anyway. I don't believe doing the SQL USE database_name will affect your connection instance at all. Maybe there is a select_db function (my DBI is rusty) for the connection object, or you'll have to create a new connection to the new database for the connection instance to report it properly.
FWIW - probably not much - DBD::Informix keeps track of the current database, which can change if you do operations such as CREATE DATABASE. The $dbh->{Name} attribute is specified by the DBI spec as the name used when the handle is established. Consequently, there is an Informix-specific attribute $dbh->{ix_DatabaseName} that provides the actual current database name. See: perldoc DBD::Informix.
You could consider requesting the maintainer(s) of DBD::MySQL add a similar attribute.