Debugging ErlyDB and MySQL

I am experimenting with ErlyDB in a non-erlyweb environment and I am not having much luck.
I have a 'Thing' table for testing, and a corresponding Thing module:
-module(thing).
-export([table/0, fields/0]).

table() ->
    thing.

fields() ->
    [name, value].
The module itself works - I can query the database fine using ([Thing] = thing:find({name, '=', "test"})).
When I try to save a new record, however, things aren't so good.
I consistently see the following error:
mysql_conn:426: fetch <<"BEGIN">> (id <0.97.0>)
mysql_conn:426: fetch <<"INSERT INTO thing(value,name) VALUES ('vtha','blah')">> (id <0.97.0>)
mysql_conn:426: fetch <<"ROLLBACK">> (id <0.97.0>)
** exception exit: {{'EXIT',{badarg,[{erlang,hd,[[]]},
{erlydb_base,'-do_save/1-fun-0-',4},
{mysql_conn,do_transaction,2},
{mysql_conn,loop,1}]}},
{rollback_result,{updated,{mysql_result,[],[],0,[]}}}}
in function erlydb_base:do_save/1
in call from erlydb_base:hook/4
in call from test_thing:test/0
called as test_thing:test()
The table exists, the credentials work, and the SQL itself is fine, as I can execute the command directly on the database.
The code I am using to save is:
erlydb:start(mysql, Database),
Thing = thing:new(<<"hello">>, <<"world">>),
thing:save(Thing),
Is there something I am missing?
Is there some way of viewing some more helpful error messages from the database?

Looking at the source of erlydb_base, the exception happens when erlydb calls your thing module's db_pk_fields() function. That function should return a list, but apparently it doesn't.

I can confirm that altering the code in erlydb.erl fixes this problem (from this reference on the mailing list).
Change line 561 of erlydb.erl from:
lists:map(fun({_Name, _Atts} = F) -> F;
             (Name) -> {Name, []}
          end, lists:usort(DefinedFields)),
to:
lists:map(fun({_Name, _Atts} = F) -> F;
             (Name) -> {Name, []}
          end, DefinedFields),
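Removing lists:usort keeps the fields in the order the module declared them (and stops duplicates from being collapsed), which is what the save path expects. The normalization itself - wrap bare names, pass {Name, Atts} pairs through unchanged - can be sketched in Python for illustration (normalize_fields is a hypothetical name, not part of ErlyDB):

```python
def normalize_fields(fields):
    """Wrap each bare field name as (name, []) while passing
    (name, attrs) pairs through unchanged, preserving order."""
    return [f if isinstance(f, tuple) else (f, []) for f in fields]

print(normalize_fields(["name", ("value", ["unique"])]))
# [('name', []), ('value', ['unique'])]
```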

Related

SQLAlchemy ORM function works in main script but fails in Pytest

Using Postgres, SQLAlchemy 1.4 ORM, Python 3.7, Pytest.
I have a script in myproject/src/db.py and the tests for it are located in myproject/tests/.
In db.py I have a function to drop any given table, it works as expected:
async def delete_table(self, table_name):
    table = self.meta.tables[table_name]
    async with self.engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all(sync_engine, [table], checkfirst=True))
It gets called like:
asyncio.run(db.delete_table('user'))
In conftest.py I have a fixture like this:
@pytest.fixture(scope='function')
def test_drop_table():
    def get_delete_tables(table_name):
        return asyncio.run(DB.delete_table(table_name))
    return get_delete_tables
In test.py I run the test like this:
@pytest.mark.parametrize('function_input, expected',
                         [('user', 'user'),
                          pytest.param('intentional failure', 'intentional failure',
                                       marks=pytest.mark.xfail(reason='testing incorrect table name fails'))])
def test_drop_table(test_drop_table, function_input, expected):
    # Drop the table
    test_drop_table(function_input)
    # Check that it no longer exists
    with pytest.raises(KeyError) as error_info:
        test_table_commits(function_input, expected)
        raise KeyError
    assert error_info.type is KeyError
When I run this test I get this error:
self = <postgresdb.PostgresDb object at 0x7f4bbd87cc18>, table_name = 'user'

    async def delete_table(self, table_name):
>       table = self.meta.tables[table_name]
E       KeyError: 'user'
I verified that this table can be dropped in the main script. I then recommitted the table, verified it was present, and tried to drop it from the test, but I continually receive a KeyError saying the table is not present, even though checking the database shows that the table actually is there.
I'm not sure what to test or adjust in the code to get Pytest working with this function. I appreciate any help!
I think the first time it deletes the table named user, but the second input in pytest.mark.parametrize is also the name user, so that is likely what throws the error. If you need to test two different scenarios, it's better to have two different test functions. That way, you can put all your code under with pytest.raises(KeyError) as error_info in the second test function.
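The duplicate-name theory is easy to reproduce without a database: popping the same key twice from a dict raises KeyError on the second attempt, which is exactly what MetaData.tables[table_name] does once the table is gone (a minimal sketch; the tables dict here is a stand-in for SQLAlchemy's MetaData.tables):

```python
# Stand-in for SQLAlchemy's MetaData.tables mapping.
tables = {"user": "user_table", "order": "order_table"}

def delete_table(table_name):
    """Drop a table by name; raises KeyError once it is already gone."""
    return tables.pop(table_name)

delete_table("user")           # first parametrized case: succeeds
try:
    delete_table("user")       # same name again: KeyError, as in the test run
except KeyError as exc:
    print("KeyError:", exc)    # KeyError: 'user'
```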

Best way to connect to MySQL and execute a query? (probably with Dapper)

I will preface this by saying I simply could not get the Sql Type Provider to work - it threw a dozen different errors and seemed to be a version conflict, so I want to avoid that. I've been following mostly C# examples and can't always get the syntax right in F#.
I am targeting .NET6 (though can drop to 5 if it's going to be an issue).
I have modelled the data as a type as well.
I like the look of Dapper the best but I generally don't need a full ORM and would just like to run raw SQL queries so am open to other solutions.
I have a MySQL server running and a connection string.
I would like to
Initialize an SQL connection with my connection string.
Execute a query (preferably in raw SQL). If a select query, map it to my data type.
Be able to later execute more queries from elsewhere in the code without reinitializing the connection.
It's really just a package and a syntax example of those three things that I need. Thanks.
This is an example where I've used Dapper to query an MS SQL Express database. I have quite a lot of helper methods that I've made through the years to make Dapper (and to a slight degree also SqlClient) easy and type-safe in F#. Below you see just two of these helpers: queryMultipleAsSeq and queryMultipleToList.
I realize now that it's not that easy to get going with Dapper and F# unless these can be made available to others. I have created a repo on GitHub for this, which will be updated regularly with new helper functions and demos to show how they're used.
The address is https://github.com/BentTranberg/DemoDapperStuff
Ok, now this initial demo:
module DemoSql.Main

open System
open System.Data.SqlClient
open Dapper
open Dapper.Contrib
open Dapper.Contrib.Extensions

let queryMultipleAsSeq<'T> (conn: SqlConnection, sql: string, args: obj) : 'T seq =
    conn.Query<'T> (sql, args)

let queryMultipleToList<'T> (conn: SqlConnection, sql: string, args: obj) : 'T list =
    queryMultipleAsSeq (conn, sql, args)
    |> Seq.toList

let connectionString = @"Server=.\SqlExpress;Database=MyDb;User Id=sa;Password=password"

let [<Literal>] tableUser = "User"

[<Table (tableUser); CLIMutable>]
type EntUser =
    {
        Id: int
        UserName: string
        Role: string
        PasswordHash: string
    }

let getUsers () =
    use conn = new SqlConnection(connectionString)
    (conn, "SELECT * FROM " + tableUser, null)
    |> queryMultipleToList<EntUser>

[<EntryPoint>]
let main _ =
    getUsers ()
    |> List.iter (fun user -> printfn "Id=%d User=%s" user.Id user.UserName)
    Console.ReadKey() |> ignore
    0
The packages used for this demo:
<PackageReference Include="Dapper.Contrib" Version="2.0.78" />
<PackageReference Include="System.Data.SqlClient" Version="4.8.2" />
The Dapper.Contrib will drag along Dapper itself.
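For comparison, here is the same three-step shape - open a connection, run raw SQL, map rows onto a typed record - sketched in Python with the stdlib sqlite3 module. This is only an analogy to the Dapper pattern above, not F#; all names are illustrative:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class EntUser:
    id: int
    user_name: str

# 1. Initialize a connection (in-memory database for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User (Id INTEGER, UserName TEXT)")
conn.execute("INSERT INTO User VALUES (1, 'alice')")

# 2./3. Execute a raw query and map each row onto the record type.
def query_to_list(conn, sql):
    return [EntUser(*row) for row in conn.execute(sql)]

users = query_to_list(conn, "SELECT Id, UserName FROM User")
print(users)  # [EntUser(id=1, user_name='alice')]
```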

Django Call Stored Procedure on Second Database

I'm trying to call a stored procedure on a multi-db Django installation, but am not having any luck getting results. The stored procedure (which is on the secondary database) always returns an empty array in Django, but the expected result does appear when executed in a mysql client.
My view.py file
from SomeDBModel import models
from django.db import connection

def index(request, someid):
    # Some related django-style query that works here
    loc = getLocationPath(someid, 1)
    print(loc)

def getLocationPath(id, someval):
    cursor = connection.cursor()
    cursor.callproc("SomeDB.spGetLocationPath", [id, someval])
    results = cursor.fetchall()
    cursor.close()
    return results
I have also tried:
from SomeDBModel import models
from django.db import connections

def index(request, someid):
    # Some related Django-style query that works here
    loc = getLocationPath(someid, 1)
    print(loc)

def getLocationPath(id, someval):
    cursor = connections["SomeDB"].cursor()
    cursor.callproc("spGetLocationPath", [id, someval])
    results = cursor.fetchall()
    cursor.close()
    return results
Each time I print out the results, I get:
[]
Example of data that should be retrieved:
{
Path: '/some/path/',
LocalPath: 'S:\Some\local\Path',
Folder: 'SomeFolderName',
Code: 'SomeCode'
}
One thing I also tried was to print the result of cursor.callproc. I get:
(id, someval)
Also, printing the result of cursor._executed gives:
b'SELECT @_SomeDB.spGetLocationPath_arg1, @_SomeDB.spGetLocationPath_arg2'
Which seems to not have any reference to the stored procedure I want to run at all. I have even tried this as a last resort:
cursor.execute("CALL spGetLocationPath("+str(id)+","+str(someval)+")")
but I get an error about needing multi=True, and putting it in the execute() call doesn't work the way some sites have suggested, and I don't know where else to put it in Django.
So...any ideas what I missed? How can I get stored procedures to work?
These are the following steps that I took:
Made my stored procedure dump results into a temporary table so as to flatten the result set to a single result set. This got rid of the need for multi=True
In addition, I made sure the user at my IP address had access to call stored procedures in the database itself.
Finally, I continued to research the callproc function. Eventually someone on another site suggested the following code, which worked:
cur = connections["SomeDB"].cursor()
cur.callproc("spGetLocationPath", [id, someval])
res = next(cur.stored_results()).fetchall()
cur.close()

how NamedParameterJdbcTemplate.update really works with Spring and MySQL

Ok, I've probably dug up the entire Google land and still couldn't find anything that could possibly answer my question.
I have my little foo method that does some deleting like this:
private void foo()
{
    jdbcNamedParameterTemplate.update(sqlString, params); // 1
    jdbcNamedParameterTemplate.update(sqlString2, params2); // 2
}
sqlString and sqlString2 are just delete statements like "DELETE FROM FooBar".
So when I get to the second call to update, do I have any guarantee that whatever operation the first one invokes in the database has already finished?
If you run those two in one session, and not multithreaded, then yes: the first update has finished in the database before the second one runs.
But if they are not in the same session, you can check the version to see whether the object has already been changed:
int oldVersion = foo.getVersion();
session.load(foo, foo.getKey()); // load the current state
if (oldVersion != foo.getVersion()) { ... } // if true, the object has been changed
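The version check above is ordinary optimistic locking; stripped of Hibernate, the idea looks like this (a minimal sketch - Record and the store dict are stand-ins for a mapped entity and the database):

```python
class Record:
    def __init__(self, key, version):
        self.key = key
        self.version = version

# Simulated persistent store: key -> current version in the database.
store = {"foo": 3}

def reload_version(record):
    """Refresh the stored version, like session.load() does."""
    record.version_in_db = store[record.key]

rec = Record("foo", version=2)   # stale in-memory copy
reload_version(rec)
changed = rec.version != rec.version_in_db
print(changed)  # True: another session bumped the version
```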

lua : passing parameter to other function problem

Not sure if anyone has ever faced this kind of problem. Here is my code.
In main.lua:
local highScore = require("highScore")
local userName = "myName"
local finishedTime = 12345
highScore:InsertHighScore(userName, finishedTime)
In highScore.lua:
function InsertHighScore(name, time)
    print(name)
    print(time)
    -- other code
end
It looks simple and shouldn't be wrong, but my console output shows:
table: 0x19e6340
myName
After a day of testing, I found that before the two parameters I pass, it is actually passing in another table, so I made these changes in highScore.lua:
function InsertHighScore(table, name, time)
    print(table)
    print(name)
    print(time)
    -- other code
end
Now my "other code" works nicely, but why does it pass me a table before my parameters?
In Lua, calling a function on an object/table with a colon instead of a dot means the object/table itself is passed into the function as the first parameter (i.e., as self). If you don't care about that, call the function with a dot instead:
highScore.InsertHighScore(userName, finishedTime)
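Lua's colon/dot distinction maps directly onto Python's bound methods, which may make the behaviour easier to see (an analogy only, not Lua):

```python
class HighScore:
    # 'self' is explicit in Python; Lua's ':' call passes it implicitly.
    def insert_high_score(self, name, time):
        return (name, time)

hs = HighScore()
# hs.insert_high_score(...) automatically passes hs as the first argument,
# just as highScore:InsertHighScore(...) passes highScore in Lua.
# Declaring the Lua function as InsertHighScore(name, time) but calling it
# with ':' is what shifted the arguments by one.
print(hs.insert_high_score("myName", 12345))  # ('myName', 12345)
```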