Ruby script ends before SQL query finishes - MySQL

I run a SQL query from a Ruby script that should take around two hours to complete.
How can I make sure the script exits only when the query has finished? Right now, when I run the script, it passes the query to the DB and then exits immediately while the query is still running on the server.
Most of the batch consists of statements like INSERT, DROP TABLE, and CREATE TABLE.
#!/usr/bin/env ruby
require 'mysql2'
client = Mysql2::Client.new(
  :host     => ENV_YML['host'],
  :username => ENV_YML['username'],
  :password => ENV_YML['password'],
  :database => ENV_YML['dbtemp'],
  :flags    => Mysql2::Client::MULTI_STATEMENTS
)
client.query("
...
")
I want to run the following query only after the first one has finished:
client.query("SELECT ;").each do |row|
  ....
end
Any idea how to wait for the query to finish? I want to add another query in the same script that checks the results of the first one after it completes.

From the official documentation:
Multiple result sets
You can also retrieve multiple result sets. For this to work you need
to connect with flags Mysql2::Client::MULTI_STATEMENTS. Multiple
result sets can be used with stored procedures that return more than
one result set, and for bundling several SQL statements into a single
call to client.query.
client = Mysql2::Client.new(:host => "localhost", :username => "root", :flags => Mysql2::Client::MULTI_STATEMENTS)
result = client.query('...')
while client.next_result
result = client.store_result
# result now contains the next result set
end
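Applied to the question: the call to client.query returns after the first statement in the batch completes, and the remaining statements keep running server-side until their result sets are consumed. So the fix is to drain the results before exiting or issuing the follow-up query. A minimal sketch (pure Ruby; client is assumed to be the Mysql2::Client from the question):

```ruby
# Drain every pending result set: each next_result call blocks until the
# corresponding statement in the batch has finished on the server, and it
# returns false once the whole batch is done. If a later statement failed,
# next_result raises a Mysql2::Error, so failures are not silently lost.
def drain_results(client)
  while client.next_result
    client.store_result # fetch (and here discard) each remaining result set
  end
end
```

With this in place, client.query("...") followed by drain_results(client) blocks until the entire multi-statement batch has completed, and the follow-up client.query("SELECT ...") can then run safely.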


Rails & RSpec: test DB with preexisting records

All my previous projects used DatabaseCleaner, so I'm used to starting with an empty DB and creating test data within each test with FactoryGirl.
Currently, I'm working on a project that has a test database with many records: an SQL file that all developers must import into their local test environments. The same DB is imported on the continuous integration server. I feel like having less control over the test data makes the testing process harder.
Some features allow their tests to focus on specific data, such as records that are associated to a certain user. In those cases, the preexisting data is irrelevant. Other features such as a report that displays projects of all clients do not allow me to "ignore" the preexisting data.
Is there any way to ignore the test DB contents in some tests (emulate an empty DB and create my own test data without actually deleting all rows in the test DB)? Maybe have two databases (both in the same MySQL server) and being able to switch between them (e.g., some tests use one DB, other tests use the other DB)?
Any other recommendations on how to deal with this scenario?
Thank you.
I would recommend preserving your test database and the 'test' environment as your 'clean' state. Then you could set up a separate database that you initially seed as your 'dirty' database. A before hook in your rails_helper file could be set up with something like this:
RSpec.configure do |config|
  config.before :each, type: :feature do |example|
    if ENV['TEST_DIRTY'] || example.metadata[:test_dirty]
      ActiveRecord::Base.establish_connection(
        :adapter  => 'mysql2',
        :database => 'test_dirty',
        :host     => '127.0.0.1',
        :username => 'root',
        :password => 'password'
      )
    end
  end
end
Your database.yml file will need a configuration entry added for your 'dirty' database. But I think the key here is keeping your clean and dirty states separate. Cheers!
I have found that adding the following configuration to spec/rails_helper.rb runs all DB operations inside tests and before(:each) blocks as transactions, which are rolled back after each test finishes. That means we can do something like before(:each) { MyModel.delete_all }, create our own test data, and run our assertions (which will only see the data we created); after the test ends, all preexisting data is still in the DB because the deletion is rolled back.
RSpec.configure do |config|
  config.use_transactional_fixtures = true
end

MySQL on EC2 Linux instance stops taking data from import scripts

I have a master server which imports data into a MySQL database on a slave server, and I'm trying to keep this running until the slave server nears capacity. I've done this by setting up cron jobs in a Chef recipe, which run 20 different Ruby files every 15 minutes. Everything works fine for about an hour, then the database stops taking entries. Can anybody shed some light on this please?
Things I've done:
1) Logged into the database and added entries. This is fine.
2) Checked the cron log to see that the jobs have run. This is fine.
Here is one of my import files, but I don't think there is any problem with it:
require 'rubygems'
require 'mysql2'
require 'open-uri'

con = Mysql2::Client.new(:host     => 'ipaddress',
                         :username => 'username',
                         :password => 'password',
                         :database => 'database')
con.query("CREATE TABLE IF NOT EXISTS words(id INT PRIMARY KEY AUTO_INCREMENT, value VARCHAR(50));")
open('http://samplewebsite.txt').each_line do |word|
  word.split(" ").each do |str|
    outstring = str.gsub(/[^a-zA-Z0-9\-]/, "")
    # escape the value before interpolating it into the statement
    con.query("INSERT INTO words(value) VALUES('#{con.escape(outstring)}');")
  end
end
puts "Done!"
con.close
Thanks in advance
Edit: After thinking about this for a little while I realized my mistake (also pointed out in the answers below): I was hitting the same website too many times from my server, so I was getting blocked. I've modified my recipes to go to different sites so the requests are more spread out, and things are working fine now. Thanks to anyone who answered!
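Independent of the rate-limiting fix, the per-word INSERT loop above issues one round trip to the database per word. A hedged sketch of batching the words into a single multi-row INSERT (fake_escape is a stand-in for con.escape so the SQL assembly can be shown without a live connection; table and column names match the script above):

```ruby
# Sketch: collect cleaned words, then emit one multi-row INSERT instead
# of one statement per word. fake_escape stands in for con.escape.
def fake_escape(str)
  str.gsub(/[\\']/) { |c| "\\#{c}" }
end

def batch_insert_sql(words)
  values = words.map { |w| "('#{fake_escape(w)}')" }
  "INSERT INTO words(value) VALUES #{values.join(', ')};"
end

sql = batch_insert_sql(%w[alpha beta gamma])
# sql => "INSERT INTO words(value) VALUES ('alpha'), ('beta'), ('gamma');"
```

With a real connection you would pass the assembled statement to con.query once per batch, which also keeps the script well under MySQL's max_allowed_packet as long as batches stay reasonably sized.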

Store MySQL query information in Chef

I am trying to query my MySQL database. I am using the database cookbook and can set up a connection with my database. I am querying my database for information, so now the question is: how do I store that information so I can access it in another resource? Where do the results of the query get stored? This is my recipe:
mysql_database "Get admin users" do
  connection mysql_connection_info
  sql "SELECT * FROM #{table_name}"
  action :query
end
Thanks in advance
If you don't have experience with Ruby, this might be really confusing. There's no way to "return" the result of a provider from a Chef resource. mysql_database is a Chef::Recipe DSL method that gets translated to Chef::Provider::Database::Mysql at runtime; this provider is defined in the cookbook.
If you take some time to dive into that provider, you can see how it executes queries using the db object. In order to get the results of a query, you'll need to create your own connection object in the recipe and execute a command against it. For example:
require 'mysql'

db = ::Mysql.new('host', 'username', 'password', nil, 'port', 'socket') # varies with setup
users = db.query('SELECT * FROM users')
#
# You might need to manipulate the result into a more manageable data
# structure by splitting on a carriage return, etc...
#
# Assume the new object is an Array where each entry is a username.
#
file '/etc/group' do
  content users.join("\n")
end
I find good old Chef::Mixin::ShellOut / shell_out() fairly sufficient for this job, and it's DB agnostic (assuming you know your SQL :) ). It works particularly well if all you are querying is one value; for multiple rows/columns you will need to parse the SQL query results, hiding row counts and column headers, eating preceding whitespace, etc., to get just the query results you want. For example, the following works on SQL Server:
Single item
so = shell_out!("sqlcmd ... -Q \"set nocount on; select file_name(1)\" -h-1 -W")
db_logical_name = so.stdout.chop
Multiple rows/columns (the 0-based position of a value within a row tells you which column it is)
so = shell_out!("sqlcmd ... -Q \"set nocount on; select * from my_table\" -h-1 -W")
rows_column_data = so.stdout.chop
# columns within rows are space separated, so can be easily parsed
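The same approach works against MySQL's CLI: mysql -N -B -e "SELECT ..." suppresses the column headers (-N) and separates columns with tabs (-B), which makes the parsing step trivial. A small sketch of that parsing step (pure string handling, so it applies to whatever shell_out captured; the sample string below is made up):

```ruby
# Parse the stdout of `mysql -N -B -e "..."` (no headers, tab-separated)
# into an array of rows, each row an array of column strings.
def parse_query_output(stdout)
  stdout.split("\n").map { |line| line.split("\t") }
end

sample = "1\talice\n2\tbob\n" # made-up stdout from a shell_out call
rows = parse_query_output(sample)
# rows => [["1", "alice"], ["2", "bob"]]
```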

Rails MySQL: empty result in development mode

I have this call in my controller:
@tournaments = Tournament.unignored.all(
  :include => [:matches, :sport, :category],
  :conditions => ["matches.status in (0,4) && matches.date < ?",
                  Time.now.end_of_week + 1.day],
  :order => "sports.sort ASC, categories.sort ASC, tournaments.sort ASC")
Everything works in production mode and in the development console as well. But when I try to browse to that page in development mode I get:
The error occurred while evaluating nil.each
When I paste the generated SQL query into MySQL Browser, there are results.
It refers to mysql2 (0.2.11) lib/active_record/connection_adapters/mysql2_adapter.rb:587:in `select'
The query arrives correctly in this one.
Has anyone had similar problems? This error came out of nowhere; no updates, etc.
Thanks!
Rails 3.0.9, MySQL 5.5, Ruby 1.8.7 and the mysql2 0.2.11 gem
It looks like you need to use :joins instead of :include.
The :include option to all (and find, where, etc.) tells Rails to perform a separate query to load all the necessary data for the given associated records.
The :joins option gets Rails to perform an SQL query that JOINs the associated models, so you can query on their fields.
If you want to both query on the fields and preload them into the associations, you need to do both:
@tournaments = Tournament.unignored.all(
  :include => [:matches, :sport, :category],
  :joins => [:matches, :sport, :category],
  :conditions => ["matches.status in (0,4) && matches.date < ?",
                  Time.now.end_of_week + 1.day],
  :order => "sports.sort ASC, categories.sort ASC, tournaments.sort ASC")

How to seed Devise users efficiently?

I'm trying to seed about 100,000 users using rake db:seed in my Rails 3 project and it is really slow!
Here's the code sample:
# ...
User.create!(
  :display_name => "#{title} #{name} #{surname}",
  :email => "#{name}.#{surname}_#{num}@localtinkers.com",
  :password => '12341234'
)
It works, but it is really slow because for each user:
Devise issues a SELECT statement to find out if the email is already taken.
A separate INSERT statement is issued.
For other objects I use "ar-extensions" and "activerecord-import" gems as follows:
tags.each do |tag|
  all_tags << Tag.new(:name => tag)
end
Tag.import(all_tags, :validate => false, :ignore => true)
The above creates just one INSERT statement for all the tags and works really fast, just like a MySQL database restore from an SQL dump.
But for users I cannot do this, because I need Devise to generate the encrypted password, salt, etc. for each user. Is there a way to generate them on the SQL side, or are there other efficient ways of seeding users?
Thank you.
How about:
u = User.new(
  :display_name => "#{title} #{name} #{surname}",
  :email => "#{name}.#{surname}_#{num}@localtinkers.com",
  :password => '12341234'
)
u.save!(:validate => false)
This should create and save the record without executing the validations, and therefore without checking for e-mail address uniqueness. Obviously, the downside is that you're not protected by any other validations on the user either, so make sure you check your data first!
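To get back to a single INSERT like the activerecord-import approach in the question, one option is to compute the password hash once and reuse it for every seeded user (fine for test data that shares a password). The sketch below only shows the SQL assembly; HASH is a placeholder, and in a real seed you would compute it once with something like User.new(:password => '12341234').encrypted_password before building the statement:

```ruby
# Sketch: build one multi-row INSERT for seeding users with a shared,
# precomputed password hash. HASH is a placeholder; compute the real one
# once via Devise before seeding.
HASH = 'PRECOMPUTED_BCRYPT_HASH'

def seed_users_sql(count, name, surname)
  values = (1..count).map do |num|
    email = "#{name}.#{surname}_#{num}@localtinkers.com"
    "('#{name.capitalize} #{surname.capitalize}', '#{email}', '#{HASH}')"
  end
  "INSERT INTO users (display_name, email, encrypted_password) " \
  "VALUES #{values.join(', ')};"
end

sql = seed_users_sql(3, 'john', 'doe')
# one statement inserting john.doe_1 .. john.doe_3
```

Because every seeded user shares the same hash, Devise never has to run bcrypt per row, which is where most of the time goes when seeding 100,000 users one by one.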