Using Postgres, SQLAlchemy 1.4 ORM, Python 3.7, Pytest.
I have a script in myproject/src/db.py and the tests for it are located in myproject/tests/.
In db.py I have a function to drop any given table; it works as expected:
async def delete_table(self, table_name):
    table = self.meta.tables[table_name]
    async with self.engine.begin() as conn:
        # run_sync expects a callable that receives the sync connection
        await conn.run_sync(
            lambda sync_conn: Base.metadata.drop_all(sync_conn, tables=[table], checkfirst=True))
It gets called like:
asyncio.run(db.delete_table('user'))
In conftest.py I have a fixture like this:
@pytest.fixture(scope='function')
def test_drop_table():
    def get_delete_tables(table_name):
        return asyncio.run(DB.delete_table(table_name))
    return get_delete_tables
In test.py I run the test like this:
@pytest.mark.parametrize('function_input, expected',
                         [('user', 'user'),
                          pytest.param('intentional failure', 'intentional failure',
                                       marks=pytest.mark.xfail(reason='testing incorrect table name fails'))])
def test_drop_table(test_drop_table, function_input, expected):
    # Drop the table
    test_drop_table(function_input)
    # Check that it no longer exists
    with pytest.raises(KeyError) as error_info:
        test_table_commits(function_input, expected)
        raise KeyError
    assert error_info.type is KeyError
When I run this test I get this error:
self = <postgresdb.PostgresDb object at 0x7f4bbd87cc18>, table_name = 'user'

    async def delete_table(self, table_name):
>       table = self.meta.tables[table_name]
E       KeyError: 'user'
I verified that this table can be dropped in the main script. I then re-create and commit the table, verify it is present, and try to drop it with the test, but I continually receive a KeyError saying the table is not present, even though checking the database shows that the table actually is.
I'm not sure what to test or adjust in the code to get Pytest working with this function. I appreciate any help!
I think the first run deletes the table named user, but the second input in pytest.mark.parametrize is also the name user, so that may be what throws the error. If you need to test two different scenarios, it's better to have two different test functions. That way, you can put all of your code under with pytest.raises(KeyError) as error_info in the second test function.
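As a rough sketch of that split, reusing the test_drop_table fixture from the question (the test names here are made up):

import pytest

def test_drop_existing_table(test_drop_table):
    # Dropping a table that exists should succeed without raising
    test_drop_table('user')

def test_drop_missing_table(test_drop_table):
    # A table name that is not in the metadata should raise KeyError
    with pytest.raises(KeyError) as error_info:
        test_drop_table('intentional failure')
    assert error_info.type is KeyError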
I'm using a Django database model to store objects corresponding to a remote ticket service. In testing this model, I'm running several tests to make sure that log messages work correctly for the database model - I'm using Django-nose to run these tests.
The database model looks something like this, using the JSONField from this StackOverflow answer with a small modification to support lists as well as dicts:
class TicketEntity(django.db.models.Model):
    tickets = JSONField(null=True, blank=True, default=[], serialize=True)
    queued_ticket_data = JSONField(null=True, blank=True, default=[], serialize=True)
    ...

    @classmethod
    def create(cls, *args, **kwargs):
        instance = cls(*args, **kwargs)
        instance.save()
        return instance

    def queue_output_for_ticket(self, log_data):
        self.queued_ticket_data += [log_data]
        self.save()

    def push_ticket(self):
        self.tickets += [self.queued_ticket_data]
        self.queued_ticket_data = []
        self.save()
        return True
The tests (in the order they appear to get run) look like this:
def test_ticket_entity_push_ticket(self):
    entity = TicketEntity.create()
    entity.queue_output_for_ticket("log output")
    entity.queue_output_for_ticket("more log output")
    entity.push_ticket()
    self.assertEquals(entity.tickets, [["log output", "more log output"]])
    self.assertFalse(entity.queued_ticket_data)
def test_ticket_entity_queue_output_for_ticket(self):
    entity = TicketEntity.create()
    log_data = "Here's some output to log.\nHere's another line of output.\nAnd another."
    entity.queue_output_for_ticket(log_data)
    self.assertEquals(entity.queued_ticket_data, [log_data])
The first test passes perfectly fine. The second test fails on that assert statement, because entity.queued_ticket_data looks like this:
["log output",
"more log output",
"Here's some output to log.\nHere's another line of output.\nAnd another."]
Those first two elements are there at the very start of the test, immediately after we call TicketEntity.create(). They shouldn't be there - the new instance should be clean because we're creating it from scratch.
Similarly, tickets is pre-filled as well. The TicketEntity has other fields that are not JSONFields, and they do not seem to exhibit this problem.
Can someone shed some light on why this problem might be happening, and what kind of modification we'd need for a fix?
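For what it's worth, the symptom matches Python's shared mutable default pitfall: default=[] builds one list at definition time, and every instance that falls back on the default can end up sharing that same object. A minimal, framework-free sketch of that behavior (the Ticketish class and queued field are made-up names):

class Ticketish:
    def __init__(self, queued=[]):   # the [] is created once, at definition time
        self.queued = queued

first = Ticketish()
first.queued.append("log output")
second = Ticketish()                 # looks fresh, but...
print(second.queued)                 # ['log output'] -- state leaked from `first`

In Django the usual guard is to pass a callable instead, e.g. default=list, so each instance gets its own fresh list.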
Any idea how to do a conditional drop in Slick 3.0, to prevent An exception or error caused a run to abort: Unknown table 'MY_TABLE' if for some reason it doesn't exist?
def clear = {
  val operations = DBIO.seq(
    myTable.schema.drop,
    // other table definitions
    ...
  )
  db.run(operations)
}
I went down the MTable route, but at least in Postgres it's a big hassle.
Try
def qDropSchema = sqlu"""drop table if exists your-table-name;""";
Watch out for case-sensitivity issues with the table name. I ran into odd problems with Postgres there; I don't know about MySQL.
Let me try to answer your question: you can first check for the table using MTable, then drop it only if it exists. More or less like below:

import scala.slick.jdbc.meta._

if (MTable.getTables("table_name").list().nonEmpty) {
  // the table exists -- safe to drop it here
}
I did this:
val personQuery = TableQuery[PersonTable]
val addressQuery = TableQuery[AddressTable]
...

val setupAction = DBIO.seq(
  sqlu"SET FOREIGN_KEY_CHECKS = 0",
  sqlu"DROP TABLE IF EXISTS #${personQuery.baseTableRow.tableName}",
  sqlu"DROP TABLE IF EXISTS #${addressQuery.baseTableRow.tableName}",
  sqlu"SET FOREIGN_KEY_CHECKS = 1"
)

val setupFuture = db.run(setupAction)
Note how you need to use #${} and not ${}; otherwise Slick will fire off something like:

DROP TABLE IF EXISTS 'PERSON'

which won't work.
I am currently using the Slick framework in version 3.2.0. The solution I am giving may also apply to earlier versions of the framework, but I did not verify this.
If the only problem is dropping the table if it exists, without throwing an exception, you can use the Action combinators for this.
I have a series of tests, and for each test I run the create/populate/drop statements against an in-memory H2 database.
Suppose you have two tables, Canal and SubCanal (SubCanal has a foreign key on Canal, so you would want to drop it first if it exists), for which you have already declared TableQuery variables such as:
lazy val canals = TableQuery[CanalTable]
lazy val subcanals = TableQuery[SubCanalTable]

// we don't include SubCanals yet, to check that no exception is produced,
// and then add it for further testing.
lazy val ddl = canals.schema // ++ subcanals.schema
...I provided helper methods as follows:
def create: DBIO[Unit] = ddl.create
def drop: DBIO[Unit] = ddl.drop

def popCanal = canals ++= Seq(
  Canal("Chat"),
  Canal("Web"),
  Canal("Mail"))
The above just creates the actions, but what is cool is that Slick will attempt to drop both the SubCanal table and the Canal table and will encapsulate any exception in the Try[...]. So this will run smoothly:
val db = Database.forConfig("yourBaseConfig")
val res = db.run(drop)
And this will also run:

val db = Database.forConfig("yourBaseConfig")
val res1 = db.run(
  create >>
  popCanal >>
  canals.result
)
// ... some interesting computation ...
val res2 = db.run(drop)
Note: the SubCanal schema is still commented out, so that table has never been created; the drop is nevertheless applied, fails for this table, but does not raise the exception.
More on combining actions (combinators):
DBIO Action documentation (3.2.0)
Essential Slick (this book is free, but you may give some money)
I'm trying to call a stored procedure on a multi-db Django installation, but am not having any luck getting results. The stored procedure (which is on the secondary database) always returns an empty array in Django, but the expected result does appear when executed in a mysql client.
My view.py file
from SomeDBModel import models
from django.db import connection

def index(request, someid):
    # Some related django-style query that works here
    loc = getLocationPath(someid, 1)
    print(loc)

def getLocationPath(id, someval):
    cursor = connection.cursor()
    cursor.callproc("SomeDB.spGetLocationPath", [id, someval])
    results = cursor.fetchall()
    cursor.close()
    return results
I have also tried:
from SomeDBModel import models
from django.db import connections

def index(request, someid):
    # Some related Django-style query that works here
    loc = getLocationPath(someid, 1)
    print(loc)

def getLocationPath(id, someval):
    cursor = connections["SomeDB"].cursor()
    cursor.callproc("spGetLocationPath", [id, someval])
    results = cursor.fetchall()
    cursor.close()
    return results
Each time I print out the results, I get:
[]
Example of data that should be retrieved:
{
    Path: '/some/path/',
    LocalPath: 'S:\Some\local\Path',
    Folder: 'SomeFolderName',
    Code: 'SomeCode'
}
One thing I also tried was to print the result of cursor.callproc. I get:
(id, someval)
Also, printing the result of cursor._executed gives:
b'SELECT #_SomeDB.spGetLocationPath_arg1, #_SomeDB.spGetLocationPath_arg2'
Which seems to not have any reference to the stored procedure I want to run at all. I have even tried this as a last resort:
cursor.execute("CALL spGetLocationPath("+str(id)+","+str(someval)+")")
but I get an error about needing multi=True. Putting it in the execute() function doesn't seem to work as some sites have suggested, and I don't know where else to put it in Django.
So...any ideas what I missed? How can I get stored procedures to work?
These are the steps that I took:

1. Made my stored procedure dump its results into a temporary table, flattening the output to a single result set. This got rid of the need for multi=True.
2. Made sure the user at my IP address had permission to call stored procedures in the database itself.
3. Continued to research the callproc function. Eventually someone on another site suggested the following code, which worked:
cur = connections["SomeDB"].cursor()
cur.callproc("spGetLocationPath", [id, someval])
res = next(cur.stored_results()).fetchall()
cur.close()
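Put together, the helper from the question might look like this sketch (it assumes the database driver exposes stored_results() on its cursors, as the snippet above implies; MySQL Connector/Python does):

from django.db import connections

def getLocationPath(id, someval):
    cursor = connections["SomeDB"].cursor()
    try:
        cursor.callproc("spGetLocationPath", [id, someval])
        # The procedure now fills a temp table and returns one result set,
        # so the first entry from stored_results() holds the rows.
        return next(cursor.stored_results()).fetchall()
    finally:
        cursor.close()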
I'm new to Groovy and coding in general, but I've come a long way in a very short amount of time. I'm currently working in Confluence to create a tracking tool, which connects to a MySql Database. We've had some great success with this, but have hit a wall with using Groovy and the Run Macro.
Currently, we can use Groovy to populate fields within the Run Macro, which works really well for drop-down options, for example:
{groovy:output=wiki}
import com.atlassian.renderer.v2.RenderMode

def renderMode = RenderMode.suppress(RenderMode.F_FIRST_PARA)

def getSql = "select * from table where x = y"
def getMacro = "{sql-query:datasource=testdb|table=false} ${getSql} {sql-query}"
def get = subRenderer.render(getMacro, context, renderMode)

def runMacro = """
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters = true}
{sql:datasource=testdb|table=false|p1=\$name|p2=\$type}
insert into table1 (name, type) values (?, ?)
{sql}
{run}
"""

out.println runMacro
{groovy}
We've also been able to use Groovy within the Run Macro, for example:
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters = true}
{groovy}
def checkSql = "select * from table where name = '\$name' and type = '\$type'"
def checkMacro = "{sql-query:datasource=testdb|table=false} ${checkSql} {sql-query}"
def check = subRenderer.render(checkMacro, context, renderMode)

if (check == "") {
    println("This information does not exist.")
} else {
    println(checkMacro)
}
{groovy}
{run}
However, we can't seem to get both scenarios to work together: Groovy inside of a Run Macro inside of Groovy.
We need to be able to get the variables out of the Run Macro form so that we can perform other functions, like checking the DB for duplicates before inserting data.
My first thought is to bypass the Run Macro and create a simple form in Groovy, but I haven't had much luck finding good examples. Can anyone help steer me in the right direction for creating a simple form in Groovy that would replace the Run Macro? Or does anyone have suggestions on how to get the rendered variables out of the Run Macro?
I am experimenting with ErlyDB in a non-erlyweb environment and I am not having much luck.
I have a 'Thing' table for testing, and a corresponding Thing module:
-module(thing).
-export([table/0, fields/0]).

table() ->
    thing.

fields() ->
    [name, value].
The module itself works - I can query the database fine using ([Thing] = thing:find({name, '=', "test"})).
When I try to save a new record, however, things aren't so good.
I consistently see the following error:
mysql_conn:426: fetch <<"BEGIN">> (id <0.97.0>)
mysql_conn:426: fetch <<"INSERT INTO thing(value,name) VALUES ('vtha','blah')">> (id <0.97.0>)
mysql_conn:426: fetch <<"ROLLBACK">> (id <0.97.0>)
** exception exit: {{'EXIT',{badarg,[{erlang,hd,[[]]},
                                     {erlydb_base,'-do_save/1-fun-0-',4},
                                     {mysql_conn,do_transaction,2},
                                     {mysql_conn,loop,1}]}},
                    {rollback_result,{updated,{mysql_result,[],[],0,[]}}}}
     in function  erlydb_base:do_save/1
     in call from erlydb_base:hook/4
     in call from test_thing:test/0
     called as test_thing:test()
The table exists, the credentials work, and the SQL itself is fine, as I can execute the command directly on the database.
The code I am using to save is:
erlydb:start(mysql, Database),
Thing = thing:new(<<"hello">>, <<"world">>),
thing:save(Thing),
Is there something I am missing?
Is there some way of viewing more helpful error messages from the database?
Looking at the source of erlydb_base, the exception happens when erlydb calls your thing module's db_pk_fields() function. That function should return a list, but apparently it doesn't.
I can confirm that altering the code in erlydb.erl fixes this problem (from this reference on the mailing list).
Change Line 561 of erlydb.erl from
lists:map(fun({_Name, _Atts} = F) -> F;
             (Name) -> {Name, []}
          end, lists:usort(DefinedFields)),
To:
lists:map(fun({_Name, _Atts} = F) -> F;
             (Name) -> {Name, []}
          end, DefinedFields),