I have a master server that imports data into a MySQL database on a slave server, and I'm trying to keep this running until the slave server reaches near capacity. I've done this by setting up cron jobs in a Chef recipe, which run 20 different Ruby files every 15 minutes. Everything works fine for about an hour, then the database stops taking entries. Can anybody shed some light on this, please?
Things I've done:
1) Logged into the database and added entries manually. This works fine.
2) Checked the cron log to confirm that the jobs have run. This is fine too.
Here is one of my import files, although I don't think the problem is in it:
require 'rubygems'
require 'mysql2'
require 'open-uri'

# mysql2 expects :host (not :hostname) for a remote server
con = Mysql2::Client.new(:host     => 'ipaddress',
                         :username => 'username',
                         :password => 'password',
                         :database => 'database')

con.query("CREATE TABLE IF NOT EXISTS words(id INT PRIMARY KEY AUTO_INCREMENT, value VARCHAR(50));")

open('http://samplewebsite.txt').each_line do |line|
  line.split(" ").each do |str|
    outstring = str.gsub(/[^a-zA-Z0-9\-]/, "")
    # escape the value so a stray quote can't break the INSERT
    con.query("INSERT INTO words(value) VALUES('#{con.escape(outstring)}');")
  end
end
puts "Done!"
con.close
Thanks in advance
Edit: After thinking about this for a little while I realized my mistake (helped by people's answers below): I was hitting the same website too many times from my server, so I was getting blocked. I've modified my recipes to go to different sites so the requests are more spread out, and things are working fine now. Thanks to anyone who answered!
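For anyone in a similar situation: besides pointing the recipes at different sites, the load can also be spread by staggering the cron schedules so the 20 jobs don't all fire in the same minute. A rough sketch of that idea in a Chef recipe (job names and file paths are made up):

# stagger 20 import jobs across the 15-minute window instead of
# running them all at once
20.times do |i|
  cron "word_import_#{i}" do
    minute "#{i % 15}-59/15"   # e.g. job 3 runs at :03, :18, :33, :48
    command "/usr/bin/ruby /opt/imports/import_#{i}.rb"
  end
end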
I have trouble understanding why Cloud SQL is so much slower than my localhost MySQL server.
I can accept that a little variation can occur from configuration to configuration, but...
On localhost it takes only 240 milliseconds to fulfill a query that returns all records from a table along with its relationships (there are only 90 records), while on Cloud SQL it takes almost 30 seconds. And it's only about 28 KB of data.
I am using PHP with Laravel 5.5. According to the GCP docs, I have all the configuration needed.
I don't think it is a quota problem.
I'm using a second-generation Cloud SQL instance and connecting from the App Engine flexible environment with auto-scaling.
I upgraded the Cloud SQL instance to 1.7 GB of RAM, so I don't think it is a server-performance issue...
Thanks.
Edit
This is confusing.
I am logging the raw SQL queries being executed, and when I use dd() to debug (which prevents the JSON response from being built), the request takes just 1 second. A lot better, for a reason I can't understand.
This is the version that finishes faster:
DB::enableQueryLog();
Contact::with([
'rooms',
'bathrooms',
'parkingLots',
'generalInterests',
'locationInterests',
'strongInterests',
'phonePrefix',
'source',
'contactType'
])->get();
dd(DB::getQueryLog());
return new JsonResponse([
'query' => 'all',
'results' => Contact::with([
'rooms',
'bathrooms',
'parkingLots',
'generalInterests',
'locationInterests',
'strongInterests',
'phonePrefix',
'source',
'contactType'
])->get()
], 200);
And this is the version that takes the same long time:
DB::enableQueryLog();
Contact::with([
'rooms',
'bathrooms',
'parkingLots',
'generalInterests',
'locationInterests',
'strongInterests',
'phonePrefix',
'source',
'contactType'
])->get();
Log::debug(print_r(DB::getQueryLog(), true));
return new JsonResponse([
'query' => 'all',
'results' => Contact::with([
'rooms',
'bathrooms',
'parkingLots',
'generalInterests',
'locationInterests',
'strongInterests',
'phonePrefix',
'source',
'contactType'
])->get()
], 200);
As you can see, the only difference is whether the return statement is reached.
Edit 2
Problem found
It turns out that when the JSON response is built, Eloquent serializes the model's appended attributes, which make requests to another REST API, and it does that for every model in the result set.
On localhost, that same operation just returns null before making the request. So there's my problem.
Nothing to do with MySQL, GCP, Laravel, or PHP. Just my own dumbness.
Thank you very much for taking the time to read this absurd question. My most sincere apologies.
Have you checked the throughput of your Internet connection? That is the main difference between your two environments. Other ways to determine the cause include: running the query directly in SQL, checking your logs (especially the SQL database log), and trying the same query from another location.
In my experience, running the query directly in SQL is the most helpful. Just a basic SELECT * FROM my_table; and see how long that takes. I also found this document on the Google site; it might help you with investigation and tuning tips. It mentions App Engine limits, but with that amount of data I don't think you are running into any of those.
All my previous projects used DatabaseCleaner, so I'm used to starting with an empty DB and creating test data within each test with FactoryGirl.
Currently, I'm working on a project that has a test database with many records. It is an SQL file that all developers must import into their local test environments, and the same DB is imported on the continuous integration server. I feel like having less control over the test data makes the testing process harder.
Some features allow their tests to focus on specific data, such as records associated with a certain user. In those cases the preexisting data is irrelevant. Other features, such as a report that displays projects of all clients, do not allow me to "ignore" the preexisting data.
Is there any way to ignore the test DB contents in some tests (emulate an empty DB and create my own test data without actually deleting all rows in the test DB)? Maybe have two databases (both in the same MySQL server) and being able to switch between them (e.g., some tests use one DB, other tests use the other DB)?
Any other recommendations on how to deal with this scenario?
Thank you.
I would recommend preserving your test database and the 'test' environment as your 'clean' state. Then you could set up a separate database that you initially seed as your 'dirty' database. A before hook in your rails_helper file could be set up with something like this:
RSpec.configure do |config|
  config.before :each, type: :feature do |example|
    if ENV['TEST_DIRTY'] || example.metadata[:test_dirty]
      ActiveRecord::Base.establish_connection(
        :adapter  => 'mysql2',
        :database => 'test_dirty',
        :host     => '127.0.0.1',
        :username => 'root',
        :password => 'password'
      )
    end
  end
end
Your database.yml file will need an entry added for your 'dirty' database. But I think the key here is keeping your clean and dirty states separate. Cheers!
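For example, an individual feature spec could opt into the pre-seeded database through the metadata that hook checks (the spec content below is purely illustrative):

RSpec.describe "client-wide report", type: :feature, test_dirty: true do
  it "sees the records seeded into the dirty database" do
    visit reports_path                        # hypothetical route
    expect(page).to have_content("Projects")  # hypothetical seeded content
  end
end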
I have found that adding the following configuration to spec/rails_helper.rb runs all DB operations inside tests or before(:each) blocks as transactions, which are rolled back after each test finishes. That means we can do something like before(:each) { MyModel.delete_all }, create our own test data, and run our assertions (which will only see the data we created); after the test ends, all preexisting data is still in the DB because the deletion is rolled back.
RSpec.configure do |config|
  config.use_transactional_fixtures = true
end
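A sketch of what a spec can then look like (model and attribute names are made up); because the whole example runs inside a transaction that is rolled back, the delete_all never permanently touches the shared seed data:

RSpec.describe "projects report" do
  before(:each) do
    Project.delete_all                                    # hide the preexisting seed rows
    @project = Project.create!(name: "Only visible project")
  end

  it "only sees the data created in this test" do
    expect(Project.count).to eq(1)
  end
end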
I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump", which is a great tool, but I don't have command-line access to this database.
I'd like CREATE and INSERT commands to be generated at the same time - basically, the same output mysqldump produces. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute a script via an HTTP request to the web server on the host, this will work.
You should be able to run the following pure-PHP script straight from a secured directory in /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with just a little difficulty. You can create a secure web page or plain-text dump link known only to you and sufficiently secured to prevent all unauthorized access. The script that builds the page/text contents could be written to follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS X;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download.
The final web page/text download would have an output of all create statements with "drop table if exists" safeguards, and insert statements. Save the output to your own machine as a ".sql" file, and execute on any backup host as needed.
I'm sorry you have to go through with this. Note that preserving mysql user accounts that you need is something else entirely.
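If the host lets you run a script at all, those three steps are only a handful of queries. A rough sketch in Ruby with the mysql2 gem (credentials, the output path, and the quoting are simplified placeholders; the same queries work from whatever language your host offers):

require 'mysql2'

con = Mysql2::Client.new(host: 'localhost', username: 'user',
                         password: 'pass', database: 'mydb')

dump = ""
con.query("SHOW TABLES").each do |row|
  table = row.values.first
  # Step 2: DROP safeguard plus the CREATE statement
  dump << "DROP TABLE IF EXISTS `#{table}`;\n"
  dump << con.query("SHOW CREATE TABLE `#{table}`").first["Create Table"] << ";\n"
  # Step 3: turn every row into an INSERT statement
  # (simplified: quotes every value as a string, which MySQL will coerce)
  con.query("SELECT * FROM `#{table}`").each do |r|
    values = r.values.map { |v| v.nil? ? "NULL" : "'#{con.escape(v.to_s)}'" }
    dump << "INSERT INTO `#{table}` VALUES (#{values.join(', ')});\n"
  end
end
File.write("dump.sql", dump)   # serve or download this file, then delete it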
Install phpMyAdmin on your web server and click Export. Many web hosts already offer this as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily -- and it works for older versions of PHP < 5.3 (unlike the mysqldump-php option offered in another answer here).
I am aware that the question says 'using query', but I believe the point here is that any means necessary is sought when shell access is not available -- that is how I landed on this page, and phpMyAdmin saved me!
I periodically need to access a MySQL database; my primary data store is MongoDB, which I access with Mongoid. I want to know the best way to manage connections to MySQL (with the mysql2 gem, 0.2.7) without using ActiveRecord.
I currently do the following ...
# In config/initializers/mysql.rb
class MySqlConnection
  def self.client
    # memoize one client per process ("@client", not "#client")
    @client ||= Mysql2::Client.new(host:     ENV['mysql_host'],
                                   username: ENV['mysql_username'],
                                   password: ENV['mysql_password'],
                                   database: ENV['mysql_database'])
  end
end
and then I use the connection like so ...
rows_q = "SELECT * FROM amaizng_table WHERE great_column = '#{cool_value}' "
rows = ::MySqlConnection.client.query(rows_q)
And everything is working okay -- but I have a sneaking suspicion that I am doing something horribly wrong, and things are going to explode down the road.
Also note: the application is hosted on Heroku.
Anyone know the best way to approach this?
Thanks!
Jonathan
Why, just WHY, would you get rid of ActiveRecord's awesomeness (or any other ORM, really)?
class Amazing < ActiveRecord::Base
  establish_connection :mysql_database
end
so simple it hurts. See this for more details.
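A quick usage sketch of what the question's hand-built SQL becomes with that model (assuming Amazing is pointed at the question's table, e.g. with self.table_name = 'amaizng_table', since the name doesn't follow Rails conventions):

# parameterized by ActiveRecord, no manual quoting or interpolation
rows = Amazing.where(great_column: cool_value)

# or, if you prefer to keep raw SQL, let AR handle the escaping
rows = Amazing.find_by_sql(["SELECT * FROM amaizng_table WHERE great_column = ?", cool_value])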
I've been searching all over for tips on this and have not had much luck so far. With the mysql2 gem, trying to execute a stored procedure that returns multiple result sets gives me an "unable to return results in this context" error. I found a suggestion to use the mysql gem instead (I can't find an explanation of what's different between the two or what I might run into by switching), and with that I've made more progress.
Here's what I have so far:
>> db = ActiveRecord::Base.connection.raw_connection
=> #<Mysql:0x1056ae3d8>
>> ActiveRecord::Base.connection.select_all("CALL p_rpt_test('', '');")
=> [{"Header"=>"Client,Project,Type,Due Date,Assigned To"}]
>> db.more_results?
=> true
>> db.next_result
Mysql::Error: Commands out of sync; you can't run this command now
from (irb):3:in `next_result'
from (irb):3
Does anyone know of a way to get this to work, with either the mysql2 or mysql gem? The app is running Rails 3.0.1.
OK, well, I have not figured out how to get AR to do this, so I've ended up just going low-level and using the mysql driver itself, which mostly works...
data = Array.new
db = ActiveRecord::Base.connection.raw_connection

# first result set: the header row returned by the procedure
header = db.query("CALL #{self.proc}(#{args});")
header.each { |r| data << r }

# second result set, if the procedure produced one
if db.next_result
  rows = db.store_result
  rows.each { |r| data << r }
end

# the connection is left out of sync after the CALL, so reset it
ActiveRecord::Base.connection.reconnect!
It works, but I can't imagine there's not a better way. Also I have to reconnect after this or I get an error on the next query, and I haven't found a way to properly close the session. Oh and I have to use the mysql gem and not mysql2.
Grrrrr.
We can use header.to_hash to get an array of hashes, or header.rows to get an array of arrays.
See http://api.rubyonrails.org/classes/ActiveRecord/Result.html
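For context, those methods apply when the result comes from select_all rather than the raw driver: in Rails 3.1 and later, select_all returns an ActiveRecord::Result, so (reusing the procedure from the question) something like this works, though it only covers the first result set:

result = ActiveRecord::Base.connection.select_all("CALL p_rpt_test('', '')")
result.to_hash   # array of hashes, one per row
result.rows      # array of arrays (values only)
result.columns   # the column names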