I am currently working on connecting to a cube from R via COM objects, in order to then gather data from the cube via MDX queries. As described in my previous question (see link below), I can now connect to the cube with the help of the RDCOMClient package and R version 3.3.1, and I can also send queries to the cube.
Moreover, when tracing the connection with SQL Server Profiler, I can see that it connects correctly and that my query executes without errors.
However, I have no idea how to get the data into R. I save the query result in the variable results, but I cannot do anything with it. Can you help me display my query results in R, please?
.
Connection + Query Code:
conn = COMCreate("ADODB.Connection")
connStr = 'my connection string'
conn[["ConnectionString"]] = connStr
conn$Open()
conn[["State"]]
query = 'some query. 100% correct, tested with other tools'
results = conn$Execute(query)
.
Information about the results variable (code, followed by the output):
names = slotNames(results)
names
[1] "ref"
.
slot(results,names[1])
pointer: 0x0000000015d63c60
.
str(results)
Formal class 'COMIDispatch' [package "RDCOMClient"] with 1 slot
..# ref:
.
class(results)
[1] "COMIDispatch"
attr(,"package")
[1] "RDCOMClient"
.
attributes(results)
$ref
$class
[1] "COMIDispatch"
attr(,"package")
[1] "RDCOMClient"
.
Thanks for helping :-)
.
Previous question: R & COM-Objects: How to connect to a OLAP cube on Windows
Consider using ADO's GetRows() method, which returns the records of a recordset as a nested VBA array that translates into a nested R list. Currently, you only retrieve the recordset object itself.
results = conn$Execute(query)$GetRows()
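For orientation, GetRows() returns the data field-major: one inner array per column, indexed by record. A minimal Python sketch, with made-up field names and values, of transposing that layout into one record per row:

```python
# GetRows-style result: one inner list per field (column-major order).
# Field names and values are made up purely for illustration.
cols = [
    [2019, 2020, 2021],       # field 1, e.g. [Year]
    [100.0, 110.5, 121.25],   # field 2, e.g. [Sales]
]

# Transpose fields-by-records into records-by-fields.
rows = list(zip(*cols))
# rows == [(2019, 100.0), (2020, 110.5), (2021, 121.25)]
```

The same transposition applies in R (e.g. looping over the nested list), since the COM array arrives in the same orientation.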
Related
I'm creating a Shiny app in which I need to create a plot from the data returned by a SQL query. I'm trying to do this by creating a data frame and storing the result in it. When I run the Shiny app it gives me the error: cannot coerce class "structure("MySQLResult", package = "RMySQL")" to a data.frame.
How can I store the database query result in a data frame?
If you work with dplyr, you can use dplyr::collect() to save query results into a data frame. Please visit the RStudio website on working with databases to see more ways to do it.
I am not sure I understood your question correctly, so correct me if I am wrong. This is what I am thinking:
frame <- dbGetQuery(con, statement = "select col1 from table1")
con is your DB connection.
Convert year into a data frame:
year_new <- data.frame(year)
If I have not answered your question, let me know. Also, kindly post how you are doing it so it will be easier to understand what the problem is.
I am fairly new to Python and MySQL. I am writing code that queries 60 different tables, each containing records for every second in a five-minute period. The code executes every five minutes. A few of the queries can reach 1/2 MB of data, but most are in the 50 KB range. I am running on a workstation with Windows 7, 64-bit, using MySQL Connector/Python. I am testing my code in PowerShell windows, but the code will eventually run as a scheduled task. The workstation has plenty of RAM (8 GB). Other processes are running, but according to the Task Manager only half of memory is in use. Mostly everything performs as expected, but sometimes processing hangs. I have inserted print statements in the code (I have also used debugger tracing) to determine where the hang occurs. It occurs on a call to fetchall. Below are the germane parts of the code. ALL-CAPS names are (pseudo)constants.
mncdb = mysql.connector.connect(
    option_files=ENV_MCG_MYSQL_OPTION_FILE,
    option_groups=ENV_MCG_MYSQL_OPTION_GROUP,
    host=ut_get_workstation_hostname(),
    database=ENV_MNC_DATABASE_NAME
)
for generic_table_id in DBR_TABLE_INDEX:
    site_table_id = DBR_SITE_TABLE_NAMES[site_id][generic_table_id]
    db_cursor = mncdb.cursor()
    db_command = (
        "SELECT *"
        + " FROM " + site_table_id
        + " WHERE " + DBR_DATETIME_FIELD
        + " >= '" + query_start_time + "'"
        + " AND " + DBR_DATETIME_FIELD
        + " < '" + query_end_time + "'"
    )
    try:
        db_cursor.execute(db_command)
        print "selected data for table " + site_table_id
        try:
            table_info = db_cursor.fetchall()
            print "extracted data for table " + site_table_id
        except:
            print "DB exception " + formatExceptionInfo()
            print "FETCH failed to return any rows..."
            table_info = []
            raise
    except:
        print "uncaught DB error " + formatExceptionInfo()
        raise
.
.
.
other processing that uses the data
.
.
.
db_cursor.close()
mncdb.close()
.
.
.
No exceptions are being raised. In a separate PowerShell window I can access the data being processed by the code. For my testing all data in the database is loaded before the code is executed. No processes are updating the database while the code is being tested. The hanging can occur on the first execution of the code or after several hours of execution.
My question is what could be causing the code to hang on the fetchall statement?
You can alleviate this by using a server-side cursor, so that rows are streamed as you read them instead of being buffered all at once:
mncdb = mysql.connector.connect(
    option_files=ENV_MCG_MYSQL_OPTION_FILE,
    option_groups=ENV_MCG_MYSQL_OPTION_GROUP,
    host=ut_get_workstation_hostname(),
    database=ENV_MNC_DATABASE_NAME,
    cursorclass=MySQLdb.cursors.SSCursor
)
But before you do this, you should also use MySQL prepared statements instead of string concatenation when building your statement.
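As a sketch of that last point, parameter binding passes values separately from the SQL text; the table and field names below are hypothetical, and identifiers still have to be validated by hand because placeholders can only bind values, not names:

```python
# Hypothetical table/field names. Placeholders (%s for MySQL drivers)
# bind the values; identifiers must be validated separately.
def build_query(table, datetime_field):
    if not (table.isidentifier() and datetime_field.isidentifier()):
        raise ValueError("unsafe identifier")
    return ("SELECT * FROM {0} WHERE {1} >= %s AND {1} < %s"
            .format(table, datetime_field))

query = build_query("site_table_01", "sample_time")
params = ("2015-06-01 00:00:00", "2015-06-01 00:05:00")
# db_cursor.execute(query, params)  # the driver escapes the bound values
```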
Hanging could involve the MySQL tables themselves and not specifically the Python code. Do they contain many records? Are they very wide tables? Are they indexed on the datetime_field?
Consider various strategies:
Select only the needed columns instead of using the asterisk, which retrieves every column.
Index on the DBR_DATETIME_FIELD used in the WHERE clause.
Diagnose further with printed timers, e.g. print(datetime.datetime.now()), to see which tables are the bottlenecks. In doing so, be sure to import the datetime module.
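A minimal sketch of that timing idea (the table name is hypothetical, and the actual query calls are left as comments):

```python
import datetime

# Wrap each per-table query with timestamps to find the slow tables.
start = datetime.datetime.now()
# db_cursor.execute(db_command)
# table_info = db_cursor.fetchall()
elapsed = datetime.datetime.now() - start
print("table {} took {}".format("site_table_01", elapsed))
```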
I am trying to pass a date selected from a date picker into the SQL query in my Python code. I tried using encode('utf-8') to remove the Unicode marker, but I am still getting the error.
I am new to Python. Can anyone please help me figure out how to solve this problem? I am using Python Flask to create the webpage.
if request.method == 'POST':
    dateval2 = request.form['datepick']
    dateval = dateval2.encode('utf-8')
    result = ("SELECT * FROM OE_TAT where convert(date,Time_IST)='?'", dateval)
    df = pd.read_sql_query(result, connection)
Error:
pandas.io.sql.DatabaseError
DatabaseError: Execution failed on sql '("SELECT * FROM OE_TAT where convert(date,Time_IST)='?'", '2015-06-01')': The first argument to execute must be a string or unicode query.
You are providing a tuple to read_sql_query, while the first argument (the query) has to be a string. That's why it gives the error "The first argument to execute must be a string or unicode query".
You can pass the parameter like this:
result = "SELECT * FROM OE_TAT where convert(date,Time_IST)=?"
df = pd.read_sql_query(result, connection, params=(dateval,))
Note that the use of ? depends on the driver you are using (there are different ways to specify parameters, see https://www.python.org/dev/peps/pep-0249/#paramstyle). It is possible you will have to use %s instead of ?.
You could also format the string beforehand, e.g. result = "SELECT * FROM OE_TAT where convert(date,Time_IST)={0}".format(dateval); however, this is not recommended, see e.g. here.
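As a self-contained illustration of qmark-style binding, here is the same idea with the stdlib sqlite3 driver (whose paramstyle happens to be ?); the table and values are made up:

```python
import sqlite3

# In-memory database with a made-up OE_TAT table to demonstrate '?' binding.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE OE_TAT (Time_IST TEXT, val INTEGER)")
conn.executemany("INSERT INTO OE_TAT VALUES (?, ?)",
                 [("2015-06-01", 1), ("2015-06-02", 2)])

# The date is passed as a bound parameter, not spliced into the SQL string.
rows = conn.execute("SELECT * FROM OE_TAT WHERE Time_IST = ?",
                    ("2015-06-01",)).fetchall()
# rows == [('2015-06-01', 1)]
```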
I'm trying to get a listing of all tables in an Access database using Matlab.
So far I am using an ActiveX object and can successfully run queries against the database, but all the methods I have read about here have failed.
I consistently get the error message 'No read permission on MSysObjects'. The query runs fine within Access itself, but the design of my program does not allow me to store the query there.
So, my question is: Is there any way to list all the tables of an Access database through Matlab?
Consider this code:
conn = actxserver('ADODB.Connection');
connString = 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Nwind.mdb';
conn.Open(connString);
rs = conn.OpenSchema('adSchemaTables').GetRows;
tableNames = rs(3, ismember(rs(4,:),'TABLE') );
and the result is:
>> tableNames'
ans =
'Categories'
'Customers'
'Employees'
'Order Details'
'Orders'
'Products'
'Shippers'
'Suppliers'
Just wondering if anyone could give a working example of using the erlang-mysql module (http://code.google.com/p/erlang-mysql-driver/).
I am new to erlang and I am trying to replace some old scripts with a few erlang batch processes. I am able to connect to the DB and even complete a query, but I am not sure how I use the results. Here is what I currently have:
-include("../include/mysql.hrl").
...
mysql:start_link(p1, "IP-ADDRESS", "erlang", "PASSWORD", "DATABASE"),
Result1 = mysql:fetch(p1, <<"SELECT * FROM users">>),
io:format("Result1: ~p~n", [Result1]),
...
I also have a prepared statement that I am using to fetch just one row (if it exists), and it would be helpful to know how to access those results as well.
This is described in the source code of mysql.erl:
Your result will be {data, MySQLRes}.
FieldInfo = mysql:get_result_field_info(MysqlRes), where FieldInfo is a list of {Table, Field, Length, Name} tuples.
AllRows = mysql:get_result_rows(MysqlRes), where AllRows is a list of lists, each representing a row.
You should also check the row count before using the result, e.g.:
RowLen = erlang:length(Row),
if
    RowLen > 0 ->
        {success};
    true ->
        {failed, "Row is null"}
end.
After trying to use the ODBC module that comes with Erlang/OTP, and running into problems, I recommend the mysql/otp driver. I replaced ODBC with it in just a few hours and it works fine.
They have good documentation so I will not add examples here.