View database entries created by factory_girl in a MySQL database

Is it possible to view the database entries (for example with phpMyAdmin) which were created by a factory? My tests are successful, so the database entries should exist. But when I add sleep(60) to my test (after creating the entry), I can't find any entries in my database.

In most setups for FactoryGirl, your database entries will be inserted in a transaction that is never committed. That means the records will never be visible outside that one test.
If you're using RSpec, you can set config.use_transactional_fixtures = false.
If you're using DatabaseCleaner, you can use DatabaseCleaner.strategy = :truncation.
After either change, records are actually committed as the test runs and will be visible outside the test. This will likely make your tests a little slower.
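At the database level, the difference between the two strategies boils down to something like this (a sketch; the users table is hypothetical):

-- Transactional strategy: the test runs inside a transaction that is rolled
-- back at the end, so the row is never visible from phpMyAdmin or any other connection.
START TRANSACTION;
INSERT INTO users (name) VALUES ('factory user');
-- ... test assertions run here ...
ROLLBACK;

-- Truncation strategy: the insert is committed normally (visible everywhere),
-- and the table is emptied after the test instead.
INSERT INTO users (name) VALUES ('factory user');
-- ... test assertions run here ...
TRUNCATE TABLE users;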

Related

Partially commit MySQL Transaction?

I want to know if there's a way to commit a transaction partially. I have a long-running transaction in C#, and when two users run this transaction in parallel, the data is co-dependent and should be visible to both of them even while the transaction is running. For example, say I have a table with these 3 columns:
username | left_child | right_child
I am making a binary tree, and whenever a new user is added to the database they end up somewhere in the tree. But I am running all of the insertions and updates in one transaction, so if there's even one error the whole transaction can be rolled back and the structure of the tree is not disturbed. The problem arises when two users are using my web app at the same time.
Say that username 'jackie_' does not have any children at the minute. Two new users 'king_' and 'robbo' sign up in parallel, and the transaction is running for both of them. Since the results of the transaction running for one user are not yet visible to the other user in the actual database, they both think that the left_child of 'jackie_' hasn't been set yet, and so they both update the left_child to their own username. Since the update was successful for both of them, they both commit the transaction. Now I have two users but only one of them has actually been entered into the tree, and the structure of the tree is completely disturbed.
So what I need is to be able to commit part of a transaction while it is still running, "partially". So if 'robbo' set the left_child of 'jackie_' first, that change is applied to the database immediately, so when 'king_' tries to update the same row, he can't. But if some other problem occurs for 'robbo' along the way, I still want to be able to roll back the whole transaction. Any other solutions which would be more practical are appreciated as well.
For all the queries that I am running, this is how I execute them inside the transaction:
string insertTreesQuery = "INSERT INTO tree (username) VALUES('king_')";
MySqlCommand insertTreesQueryCmd = new MySqlCommand(insertTreesQuery, con);
insertTreesQueryCmd.Transaction = sqlTrans;  // enlist the command in the shared transaction
insertTreesQueryCmd.ExecuteNonQuery();
where sqlTrans is the transaction that I am using for all the MySqlCommand objects before executing them
What you are asking is not possible. You cannot "partially" commit a transaction. I'm assuming that your example is greatly simplified, since you mention the transactions are long-running. In that case, it would probably be best to split them up into smaller ones that can be committed independently, thus reducing the chance of a conflict.
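For the specific race described above there are also well-known alternatives to a partial commit; a sketch using the tree table from the question:

-- Option 1: make the claim atomic with a conditional UPDATE. Exactly one of
-- the two concurrent transactions will match the row; the other sees
-- 0 affected rows and knows the slot was already taken.
UPDATE tree
SET left_child = 'robbo'
WHERE username = 'jackie_' AND left_child IS NULL;

-- Option 2: lock the parent row up front with SELECT ... FOR UPDATE. The
-- second transaction blocks here until the first commits or rolls back,
-- then re-reads the current state of the row.
SELECT left_child, right_child
FROM tree
WHERE username = 'jackie_'
FOR UPDATE;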

MySQL performing a "No impact" temporary INSERT with replication avoiding Locks

So, we are trying to run a report that goes to the screen and does not change any stored data.
However, it is complex, so it needs to go through a couple of (TEMPORARY*) tables.
It pulls data from live tables, which are replicated.
The nasty bit comes when we take the "eligible" records from temp_PreCalc and populate them from the live data to create the next (TEMPORARY*) table, resulting in effectively:
INSERT INTO temp_PostCalc (...)
SELECT ...
FROM temp_PreCalc
JOIN live_Tab1 ON ...
JOIN live_Tab2 ON ...
JOIN live_Tab3 ON ...
The report is not a "definitive" answer; the expectation is that it is merely a "snapshot" and will be out of date as soon as it appears on screen.
There is no order or reproducibility issue.
So ideally, I would turn my TRANSACTION ISOLATION LEVEL down to READ COMMITTED...
However, I can't, because live_Tab1,2,3 are replicated with BINLOG_FORMAT = STATEMENT...
The statement itself is lovely and quick - it takes hardly any time to run, so the resource load is now less than it used to be (the old version did separate selects and inserts). But it waits (as I understand it) because the SELECT has to take shared locks on the live_Tab's so that the result can be replicated safely.
In fact it now takes more time because of that wait.
I'd like to SEE that performance benefit in response time!
Except the data is written to (TEMPORARY*) tables and then thrown away.
There are no live_ table destinations - only sources...
* These tables are actually not TEMPORARY TABLES but dynamically created and discarded InnoDB tables, as the report calculation requires self-joins and deletes... but they are temporary in spirit.
I now seem to be going around in circles finding an answer.
I don't have the SUPER privilege and don't want it...
So I can't SET SQL_LOG_BIN=0 for this connection's session (why is this a requirement?)
So...
If I have a scratch database, or a table wildcard, which excludes all my temp_ "temporary" tables from replication...
(I am waiting for this change to go through at my host centre)
Will MySQL allow me to
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO temp_PostCalc (...)
SELECT ...
FROM temp_PreCalc
JOIN live_Tab1 ON ...
JOIN live_Tab2 ON ...
JOIN live_Tab3 ON ...
;
Or will I still get my
"Cannot Execute statement: impossible to write to binary log since
BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine
limited to row-based logging..."
Even though it's not technically true?
I am expecting it to, as I presume that replication will kick in simply because it sees the INSERT statement and does a simple check on whether any of the tables involved are replication-eligible, even though none of the destinations actually are...
or will it pleasantly surprise me?
I really can't face using an unpleasant solution like
SELECT ... INTO OUTFILE
LOAD DATA INFILE
In fact, I don't think I could even use that - how would I get unique filenames? How would I clean them up?
The reports are run on-demand directly by end users, and I only have MySQL interface access to the server.
or streaming it through the PHP client, just to separate the INSERT from the SELECT so that MySQL doesn't get upset about which tables are replication-eligible...
So, it looks like the only way appears to be:
We create a second Schema "ScratchTemp"...
Set the dreaded replication --replicate-ignore-db=ScratchTemp
My "local" query code opens a new mysql connection, and performs a USE ScratchTemp;
Because I have selected the default database of the "ignore"d one - none of my queries will be replicated.
So I need to take huge care not to perform ANY real queries here
Reference my scratch_ tables and actual data tables by prefixing them all on my queries with the schema qualified name...
e.g.
INSERT INTO LiveSchema.temp_PostCalc (...) SELECT ... FROM LiveSchema.temp_PreCalc JOIN LiveSchema.live_Tab1 etc etc as above.
And then close this connection just as soon as I can, as it is frankly dangerous to have a non-replicated connection open....
Sigh...?
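For reference, the steps above consolidated into one session (a sketch; the column names are hypothetical, and the real column lists would come from the report):

-- On a fresh connection used only for the report:
USE ScratchTemp;  -- the default database is replication-ignored, so nothing is binlogged
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Every real table is referenced by its schema-qualified name:
INSERT INTO LiveSchema.temp_PostCalc (user_id, total)
SELECT p.user_id, SUM(t1.amount)
FROM LiveSchema.temp_PreCalc AS p
JOIN LiveSchema.live_Tab1 AS t1 ON t1.user_id = p.user_id
GROUP BY p.user_id;

-- Read the report back, then close the connection as soon as possible.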

Getting stale results in multiprocessing environment

I am using 2 separate processes via multiprocessing in my application. Both have access to a MySQL database via SQLAlchemy Core (not the ORM). One process reads data from various sources and writes it to the database. The other process just reads the data from the database.
I have a query which gets the latest record from a table and displays its id. However, it always displays the first id, which was created when I started the program, rather than the latest inserted id (new rows are created every few seconds).
If I use a separate MySQL tool and run the query manually I get correct results, but SQLAlchemy is always giving me stale results.
Since you can see the changes your writer process is making with another MySQL tool, your writer process is indeed committing the data (at least if you are using InnoDB).
InnoDB, at its default REPEATABLE READ isolation level, shows you the state of the database as of when your transaction started reading. Whatever other tools you are using probably have an autocommit feature turned on, where a new transaction is implicitly started after each query.
To see the changes in SQLAlchemy do as zzzeek suggests and change your monitoring/reader process to begin a new transaction.
One technique I've used to do this myself is to add autocommit=True to the execution_options of my queries, e.g.:
result = conn.execute(select([table]).where(table.c.id == 123).execution_options(autocommit=True))
Assuming you're using InnoDB, the data on your connection will appear "stale" for as long as you keep the current transaction running, or until you commit the other transaction. In order for one process to see the data from the other process, two things need to happen: 1. the transaction that created the new data needs to be committed, and 2. the current transaction, assuming it has already read some of that data, needs to be rolled back or committed and started again. See The InnoDB Transaction Model and Locking.
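The underlying InnoDB behaviour is easy to reproduce in two plain MySQL sessions (a sketch; the records table is hypothetical):

-- Session A (the reader), at the default REPEATABLE READ level:
START TRANSACTION;
SELECT MAX(id) FROM records;   -- the snapshot is established by this first read

-- Session B (the writer):
INSERT INTO records (id) VALUES (99);
COMMIT;

-- Session A again: still sees the old MAX(id), because its snapshot is fixed.
SELECT MAX(id) FROM records;
COMMIT;                        -- end the transaction (ROLLBACK works too)
SELECT MAX(id) FROM records;   -- a new snapshot: now sees 99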

JDBC / MySQL query and update in a transaction

Basically our user provisioning algorithm does something like:
- query for a new user
- update the database to show you have that user
I'm wondering how to prevent other instances of the process from doing the "read" step while one has already started. So it's a little more aggressive than a typical transaction, because it needs to be a read-read lock, and of course unrelated processes should be able to read without being affected by the lock.
You can simply run the UPDATE query immediately to "steal" all inactive users for the current server.
Since individual UPDATE queries are always atomic, this will ensure that each user is only grabbed by one server.
Since MySQL does not allow you to return the updated rows from an UPDATE, you will need to add an identifier column to tell you which rows were "stolen".
Every time you provision users, pick a GUID, set the identifier column to that GUID in the UPDATE statement, then SELECT the rows WHERE that column still has that GUID.
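A sketch of that pattern (the users table and claimed_by column are hypothetical):

-- Claim a batch of unassigned users atomically. No other server can grab
-- these rows between the UPDATE and the SELECT, because the UPDATE has
-- already stamped them with this server's unique batch id.
SET @batch = UUID();

UPDATE users
SET claimed_by = @batch
WHERE claimed_by IS NULL
LIMIT 10;                      -- MySQL allows LIMIT on single-table UPDATEs

SELECT id, username
FROM users
WHERE claimed_by = @batch;     -- exactly the rows this server just "stole"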

MySQL table locking for a multi user JSP/Servlets site

Hi, I am developing a site with JSP/Servlets running on Tomcat for the front-end, with a MySQL db for the backend which is accessed through JDBC.
Many users of the site can access and write to the database at the same time; my question is:
Do I need to explicitly take locks before each write/read access to the db in my code?
Or does Tomcat handle this for me?
Also, do you have any suggestions on how best to implement this? I have written a significant amount of JDBC code already without taking the locks :/
I think you are thinking about transactions when you say "locks". At the lowest level, your database server already ensures that parallel reads and writes won't corrupt your tables.
But if you want to ensure consistency across tables, you need to employ transactions. Simply put, what transactions provide is an all-or-nothing guarantee. That is, if you want to insert an Order in one table and related OrderItems in another table, what you need is an assurance that if the insertion of OrderItems fails (in step 2), the changes made to the Order table (step 1) will also get rolled back. This way you'll never end up in a situation where a row in the Order table has no associated rows in OrderItems.
This, of course, is a very simplified description of what a transaction is. You should read more about it if you are serious about database programming.
In Java, you usually do transactions roughly with the following steps:
Set autocommit to false on your JDBC connection
Do several inserts and/or updates using the same connection
Call conn.commit() when all the inserts/updates that go together are done
If there is a problem somewhere during step 2, call conn.rollback()