Could anyone give a working example of using the erlang-mysql-driver module (http://code.google.com/p/erlang-mysql-driver/)?
I am new to Erlang and am trying to replace some old scripts with a few Erlang batch processes. I am able to connect to the DB and even run a query, but I am not sure how to use the results. Here is what I currently have:
-include("../include/mysql.hrl").
...
mysql:start_link(p1, "IP-ADDRESS", "erlang", "PASSWORD", "DATABASE"),
Result1 = mysql:fetch(p1, <<"SELECT * FROM users">>),
io:format("Result1: ~p~n", [Result1]),
...
I also have a prepared statement that I use to fetch just one row (if it exists), and it would be helpful to know how to access the results of that as well.
This is described in the source code of mysql.erl:
Your result will be {data, MysqlRes}.
FieldInfo = mysql:get_result_field_info(MysqlRes), where FieldInfo is a list of {Table, Field, Length, Name} tuples.
AllRows = mysql:get_result_rows(MysqlRes), where AllRows is a list of lists, each representing a row.
Each Row in AllRows is a list of column values; you should check that the row you are interested in is non-empty before using it, e.g.:
RowLen = erlang:length(Row),
if
    RowLen > 0 ->
        {success};
    true ->
        {failed, "Row is null"}
end.
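Putting those pieces together, here is a minimal sketch (the pool name p1 comes from the question; the column names, the user id 42, and the prepared-statement calls mysql:prepare/2 and mysql:execute/3 are illustrative assumptions based on the driver's API):

%% Fetch all users and extract the rows from the result record.
case mysql:fetch(p1, <<"SELECT id, name FROM users">>) of
    {data, MysqlRes} ->
        FieldInfo = mysql:get_result_field_info(MysqlRes),
        AllRows = mysql:get_result_rows(MysqlRes),
        io:format("Fields: ~p~n", [FieldInfo]),
        %% Each row is a list of column values in SELECT order.
        lists:foreach(fun([Id, Name]) ->
                              io:format("user ~p: ~p~n", [Id, Name])
                      end, AllRows);
    {error, MysqlErr} ->
        io:format("Query failed: ~p~n", [mysql:get_result_reason(MysqlErr)])
end,

%% Prepared statement that returns at most one row.
mysql:prepare(find_user, <<"SELECT id, name FROM users WHERE id = ?">>),
case mysql:execute(p1, find_user, [42]) of
    {data, OneRes} ->
        case mysql:get_result_rows(OneRes) of
            [[Id, Name]] -> io:format("found ~p: ~p~n", [Id, Name]);
            []           -> io:format("no such user~n")
        end;
    {error, Err} ->
        io:format("Execute failed: ~p~n", [mysql:get_result_reason(Err)])
end.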
After trying the ODBC module that comes with Erlang/OTP and running into problems, I recommend the mysql/otp driver. I replaced ODBC with it in just a few hours and it works fine.
It has good documentation, so I will not add examples here.
I am in the process of migrating databases from SQLite to MySQL. Now that I've migrated the data to MySQL, I'm not able to use my SQLAlchemy code (in Python 3) to access it in the new MySQL db. I was under the impression that SQLAlchemy syntax was database agnostic (i.e. the same syntax would work for accessing SQLite and MySQL), but this appears not to be the case. So my question is: is it absolutely required to use a DBAPI in addition to SQLAlchemy to read the data? Do I have to edit all of my SQLAlchemy code to read from MySQL now?
The documentation says: "The MySQL dialect uses mysql-python as the default DBAPI. There are many MySQL DBAPIs available, including MySQL-connector-python and OurSQL", which I think means that I DO need a DBAPI.
My old code worked like this with SQLite:
engine = create_engine('sqlite:///pmids_info.db')

def connection():
    conn = engine.connect()
    return conn

def load_tables():
    metadata = MetaData(bind=engine) #init metadata. will be empty
    metadata.reflect(engine) #retrieve db info for metadata (tables, columns, types)
    inputPapers = Table('inputPapers', metadata)
    return inputPapers

inputPapers = load_tables()

def db_inputPapers_retrieval(user_input):
    result = engine.execute("select title, author, journal, pubdate, url from inputPapers where pmid = :0", [user_input])
    for row in result:
        title = row['title']
        author = row['author']
        journal = row['journal']
        pubdate = row['pubdate']
        url = row['url']
        apa = str(author+' ('+pubdate+'). '+title+'. '+journal+'. Retrieved from '+url)
        return apa
This worked fine and dandy. So then I tried to update it to work with the mysql db like this:
engine = create_engine('mysql://snarkshark@localhost/pmids_info')
At first when I tried to run my sample code like this, it complained because I didn't have MySQLdb. Some googling around informed me that MySQLdb does NOT work for Python 3. So then I tried pip installing pymysql and changing my engine statement to
engine = create_engine('mysql+pymysql://snarkshark@localhost/pmids_info')
which also ends up giving me various syntax errors when I try to adjust things.
So what I want to know is whether there is any way I can get my current syntax to work with MySQL. Since the syntax is from SQLAlchemy, I thought it would work perfectly for the exact same data in MySQL that was previously in SQLite. Will I have to go through and update ALL of my db functions to use the syntax of the DBAPI?
This will sound like a dumb answer, but you'll need to change all the places where you're using database-specific behavior. SQLAlchemy does not guarantee that anything you do with it is portable across all backends. It leaks some abstractions on purpose to allow you to do things that are only available on certain backends. What you're doing is like using Python because it's cross-platform, then doing a bunch of os.fork()s everywhere, and then being surprised that it doesn't work on Windows.
For your specific case, at a minimum, you need to wrap all your raw SQL in text() so that you're not affected by the supported paramstyle of the DBAPI. However, there are still subtle differences between different dialects of SQL, so you'll need to use the SQLAlchemy SQL expression language instead of raw SQL if you want portability. After all that, you'll still need to be careful not to use backend-specific features in the SQL expression language.
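As a sketch of what that looks like for the query in the question (the table and column names come from the question; the connection URL and the pymysql driver are assumptions), wrapping the statement in text() with a named bind parameter keeps it portable between SQLite and MySQL:

from sqlalchemy import create_engine, text

# Only the URL needs to change between backends.
engine = create_engine('mysql+pymysql://snarkshark@localhost/pmids_info')

query = text(
    "SELECT title, author, journal, pubdate, url "
    "FROM inputPapers WHERE pmid = :pmid"
)

def db_inputPapers_retrieval(user_input):
    with engine.connect() as conn:
        # The dialect handles the bind parameter, so the same statement
        # runs against SQLite and MySQL.
        for row in conn.execute(query, {"pmid": user_input}):
            return "{0} ({1}). {2}. {3}. Retrieved from {4}".format(
                row.author, row.pubdate, row.title, row.journal, row.url)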
There are a few examples out there, but none of them are very clear (or they target old versions).
I want to call a MySQL procedure and check the return status (in Rails 4.2). The most common method I have seen is to call result = ActiveRecord::Base.connection.execute("call example_proc()"), but in some places people wrote that there is a prepared method, result = ActiveRecord::Base.connection.execute_procedure("Stored Procedure Name", arg1, arg2) (however, it didn't compile).
So what is the correct way to call a MySQL procedure and get its status?
Edit:
And how do I send the parameters safely, where the first parameter is an integer, the second a string, and the third a boolean?
Rails 4 ActiveRecord::Base doesn't support an execute_procedure method, though result = ActiveRecord::Base.connection.execute still works, i.e.
result = ActiveRecord::Base.connection.execute("call example_proc('#{arg1}','#{arg2}')")
You can try Vishnu's approach below,
or
you can also try
ActiveRecord::Base.connection.exec_query("call example_proc('#{arg1}','#{arg2}')")
(see the Rails documentation for exec_query)
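Since the edit asks how to send the parameters safely, note that interpolating them directly into the SQL string (as above) is open to SQL injection. One hedged option (assuming Rails 4.2 with the mysql2 adapter; example_proc and the three argument types come from the question) is to let ActiveRecord quote the values before executing:

# Build the CALL statement with properly quoted parameters, then run it.
# sanitize_sql_array is protected in Rails 4.2, hence the send().
sql = ActiveRecord::Base.send(
  :sanitize_sql_array,
  ["CALL example_proc(?, ?, ?)", arg1.to_i, arg2.to_s, arg3 ? 1 : 0]
)
result = ActiveRecord::Base.connection.execute(sql)

# With the mysql2 adapter the result is enumerable, so any status
# row(s) the procedure returns can be inspected here.
result.each { |row| puts row.inspect }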
In general, you should be able to call stored procedures in a regular where or select method for a given model:
YourModel.where("YOUR_PROC(?, ?)", var1, var2)
As for your comment "Bottom line I want the most correct approach with procedure validation afterwards (for warnings and errors)", I guess it always depends on what you actually want to implement and how readable you want your code to be.
For example, if you want to return rows of YourModel attributes, then it would probably be better to use the above statement with the where method. On the other hand, if you are writing an SQL adapter, you might want to go down to the ActiveRecord::Base.connection.execute level.
By the way, there is something about stored procedure performance that should be mentioned here. In several databases, the engine optimizes a stored procedure on its first run. However, the parameters you pass to that first run might not be those it will run with most frequently later on. As a result, your stored procedure might be auto-optimized in a non-optimal way for your case. It may or may not happen this way, but it is something to consider when using stored procedures with dynamic parameters.
I believe you have tried many other solutions and got one error or another, mostly "out of sync" or "closed connection" errors. These errors occur every SECOND time you try to execute the queries. To overcome this, we need a workaround that behaves as if the connection were new every time. Here is my solution, which didn't throw any errors.
# Check out a dedicated connection for the model from the pool
conn = ModelName.connection_pool.checkout
# Use the new connection to execute the query
records = conn.execute("call proc_name('params')")
# Check the connection back in once done
ModelName.connection_pool.checkin(conn)
The other approaches failed for me, possibly because ActiveRecord connections are automatically checked out and checked in per thread. When our method tries to check out a connection just to execute the SP, it might conflict, since there will already be an active connection by the time the method starts.
So the idea is to manually check out a connection for the model (rather than per thread/function) from the pool and check it back in once the work is done. This worked great for me.
I am using the RJDBC package to connect to a MySQL (Maria DB) database in R on a Windows 7 machine and I am trying a statement like
select a as b
from table
but the column is always still named "a" in the resulting data frame.
This works normally with RODBC and RMySQL but doesn't work with RJDBC. Unfortunately, I have to use RJDBC, as it is the only package that has no problem with the encoding of Chinese, Hebrew, and similar characters (SET NAMES and so on don't seem to work with RODBC and RMySQL).
Has anybody experienced this problem?
I have run into the same frustrating issue. Sometimes the AS keyword would have its intended effect, but other times it wouldn't. I was unable to identify the conditions to make it work correctly.
Short Answer: (Thanks to Simon Urbanek (package maintainer for RJDBC), Yev, and Sebastien! See the Long Answer.) One thing that you may try is to open your JDBC connection using ?useOldAliasMetadataBehavior=true in your connection string. Example:
drv <- JDBC("com.mysql.jdbc.Driver", "C:/JDBC/mysql-connector-java-5.1.18-bin.jar", identifier.quote="`")
conn <- dbConnect(drv, "jdbc:mysql://server/schema?useOldAliasMetadataBehavior=true", "username", "password")
query <- "SELECT `a` AS `b` FROM table"
result <- dbGetQuery(conn, query)
dbDisconnect(conn)
This ended up working for me! See more details, including caveats, in the Long Answer.
Long Answer: I tried all sorts of stuff, including making views, changing queries, using JOIN statements, NOT using JOIN statements, using ORDER BY and GROUP BY statements, etc. I was never able to figure out why some of my queries were able to rename columns and others weren't.
I contacted the package maintainer (Simon Urbanek.) Here is what he said:
In the vast majority of cases this is an issue in the JDBC driver, because there is really not much RJDBC can do other than call the driver.
He then recommended that I make sure I had the most recent JDBC driver for MySQL. I did have the most recent version. However, it got me thinking "maybe it IS a bug with the JDBC driver." So, I searched Google for: mysql jdbc driver bug alias.
The top result for this query was an entry at bugs.mysql.com. Yev, using MySQL 5.1.22, reported that when he upgraded from driver version 5.0.4 to 5.1.5, his column aliases stopped working, and asked whether it was a bug.
Sebastien replied, "No, it's not a bug! It's a documented change of behavior in all subsequent versions of the driver." and suggested using ?useOldAliasMetadataBehavior=true, citing documentation for the JDBC driver.
Caveat Lector: The documentation for the JDBC driver states that
useColumnNamesInFindColumn is preferred over useOldAliasMetadataBehavior unless you need the specific behavior that it provides with respect to ResultSetMetadata.
I haven't had the time to fully research what this means. In other words, I don't know what all of the ramifications of using useOldAliasMetadataBehavior=true are. Use at your own risk. Does someone else have more information?
I don't know RJDBC, but in some cases when it is necessary to give permanent aliases to columns without renaming them, you can use VIEWs
CREATE OR REPLACE VIEW v_table AS
SELECT a AS b
FROM table
... and then ...
SELECT b FROM v_table
There is a separate function in the ResultSetMetaData interface for retrieving the column label vs the column name:
String getColumnLabel(int column) throws SQLException;
Gets the designated column's suggested title for use in printouts and displays. The suggested title is usually specified by the SQL AS clause. If a SQL AS is not specified, the value returned from getColumnLabel will be the same as the value returned by the getColumnName method.
Using getColumnLabel should resolve this issue (if not, check that your JDBC driver is following this spec).
e.g.
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
while(rs.next()) {
for (int i = 1; i < columnCount + 1; i++) {
String label = rsmd.getColumnLabel(i);
System.out.println(rs.getString(label));
}
}
This is the workaround we use for R and SAP HANA via RJDBC:
names(result)[1] <- "b"
It's not the nicest workaround, but since Aaron's solution doesn't work for us, we went with this "solution".
I'm trying to create a text file that contains the value of a custom field I added in Redmine. I tried to get it from an SQL query in the create method of project_controller.rb (at line 80 in Redmine 1.2.0), as follows:
sql = Mysql.new('localhost', 'root', 'pass', 'bitnami_redmine')
rq = sql.query("SELECT value
                FROM custom_values
                INNER JOIN projects
                ON custom_values.customized_id = projects.id
                WHERE custom_values.custom_field_id = 7
                AND projects.name = '#{@project.name}'")
rq.each_hash { |h|
  File.open('pleasework.txt', 'w') { |myfile|
    myfile.write(h['value'])
  }
}
sql.close
This works fine if I test it in a separate file (with an existing project name instead of @project.name), so it may be a syntax issue, but I can't find what it is. I'd also be glad to hear any other solution to get that value.
Thanks!
(there's a very similar post here but none of the solutions actually worked)
First, you could use Project.connection.query instead of your own Mysql instance. Second, I would log the SQL with RAILS_DEFAULT_LOGGER.info "SELECT ..." and check whether it's OK. And third, I would use identifier instead of name.
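As an illustration of the first suggestion, here is a hedged sketch that goes through ActiveRecord instead of opening a second MySQL handle (CustomValue is Redmine's model for the custom_values table; the custom field id 7 comes from the question, and the Rails 2.3 finder syntax is assumed for Redmine 1.2.0):

# Look up the custom field value for the current project via ActiveRecord.
custom_value = CustomValue.find(
  :first,
  :conditions => {
    :customized_type => 'Project',
    :customized_id   => @project.id,
    :custom_field_id => 7
  }
)

File.open('pleasework.txt', 'w') do |myfile|
  myfile.write(custom_value.value) if custom_value
end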
I ended up simply using params["project"]["custom_field_values"]["x"], where x is the custom field's id. I still don't know why the SQL query didn't work, but this is much simpler and faster.
I am running a script that stores different datasets to a MySQL database. This works so far, but only sequentially. e.g.:
# write table1
replaceTable(con,tbl="table1",dframe=dframe1)
# write table2
replaceTable(con,tbl="table2",dframe=dframe2)
If I select both (I use StatET / Eclipse) and run the selection, I get an error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "dbWriteTable",
for signature "MySQLConnection", "data.frame", "data.frame".
I guess this has to do with the fact that my con is still busy when the second request is started. When I run the script line by line, it works fine. Hence I wonder: how can I tell R to wait until the first request has finished before going ahead? How can I make R scripts interactive (console only, like the plot examples; no tcl/tk)?
EDIT:
require(RMySQL)

replaceTable <- function(con, tbl, dframe){
  if(dbExistsTable(con, tbl)){
    dbRemoveTable(con, tbl)
    dbWriteTable(con, tbl, dframe)
    cat("Existing database table updated / overwritten.")
  } else {
    dbWriteTable(con, tbl, dframe)
    cat("New database table created")
  }
}
dbWriteTable has two important arguments:
overwrite: a logical specifying whether to overwrite an existing table or not. Its default is ‘FALSE’.
append: a logical specifying whether to append to an existing table in the DBMS. Its default is ‘FALSE’.
For past projects I have successfully achieved appending, overwriting, creating, etc. of tables with the proper combinations of these, for example:
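(A minimal sketch; the connection details are placeholders, and dframe1/dframe2 are the data frames from the question.)

library(RMySQL)

con <- dbConnect(MySQL(), user = "user", password = "pass",
                 dbname = "mydb", host = "localhost")

# Replace the table in one call instead of dbRemoveTable() + dbWriteTable()
dbWriteTable(con, name = "table1", value = dframe1,
             overwrite = TRUE, row.names = FALSE)

# Or add new rows to an existing table
dbWriteTable(con, name = "table2", value = dframe2,
             append = TRUE, row.names = FALSE)

dbDisconnect(con)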