Get Redmine custom field value to a file - mysql

I'm trying to create a text file that contains the value of a custom field I added in Redmine. I tried to get it from an SQL query in the create method of project_controller.rb (at line 80 in Redmine 1.2.0) as follows:
sql = Mysql.new('localhost', 'root', 'pass', 'bitnami_redmine')
rq = sql.query("SELECT value
                FROM custom_values
                INNER JOIN projects ON custom_values.customized_id = projects.id
                WHERE custom_values.custom_field_id = 7
                AND projects.name = '#{@project.name}'")
rq.each_hash { |h|
  File.open('pleasework.txt', 'w') { |myfile|
    myfile.write(h['value'])
  }
}
sql.close
This works fine if I test it in a separate file (with an existing project name instead of @project.name), so it may be a syntax issue, but I can't find what it is. I'd also be glad to hear about any other way to get that value.
Thanks !
(There's a very similar post here, but none of the solutions actually worked.)

First, you could use Project.connection.query instead of creating your own Mysql instance. Second, I would log the SQL with RAILS_DEFAULT_LOGGER.info "SELECT ..." and check that the statement is what you expect. Third, I would match on the project's identifier instead of its name.
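A rough sketch combining those three suggestions (an assumption-laden sketch, not Redmine's own code: it assumes Redmine 1.2 / Rails 2.3, where RAILS_DEFAULT_LOGGER is available, that it runs in the create action after @project has been built, and it uses select_all, a standard ActiveRecord connection method, in place of query):
# Reuse ActiveRecord's existing connection instead of opening a second Mysql handle.
# Log the statement first so you can inspect exactly what is sent to MySQL.
sql = "SELECT value FROM custom_values " \
      "INNER JOIN projects ON custom_values.customized_id = projects.id " \
      "WHERE custom_values.custom_field_id = 7 " \
      "AND projects.identifier = '#{@project.identifier}'"
RAILS_DEFAULT_LOGGER.info(sql)
Project.connection.select_all(sql).each do |row|
  File.open('pleasework.txt', 'w') { |f| f.write(row['value']) }
end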

I ended up simply using params["project"]["custom_field_values"]["x"], where x is the custom field's id. I still don't know why the SQL query didn't work, but this is much simpler and faster.
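For reference, a minimal sketch of that approach inside the create action (the field id "7" is only an example, and the nil check is an added safeguard):
# Read the submitted custom field value straight from the form params
# and write it to the same file as before.
value = params["project"]["custom_field_values"]["7"]
File.open('pleasework.txt', 'w') { |f| f.write(value) } if value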

Related

SAS pass through - Extract from MySQL does not work

I'm trying to build a Data Integration job that uses pass-through to extract data from a view in a MySQL database.
We've been using pass-through a lot in this project, mostly to extract data from Redshift;
however, with MySQL I was not able to make it work properly.
It keeps complaining that a table is missing, even though when pass-through is off the view is found and the data is extracted.
I've tried every trick I know, from enabling case-sensitive DBMS object names to manually removing single/double quotes from the statement, just in case MySQL confuses it with something else.
No luck.
The ODBC driver is [MySQL][ODBC 5.3(a) Driver][mysqld-5.5.53].
This runs in a Windows environment.
Any idea how to solve this?
Thank you in advance.
EDIT
So, first of all, one correction (even though not that important: I extract from a view, not a table).
This is the code generated by the SAS Create Table transformation, with pass-through enabled. I only put an asterisk in the outer select instead of the full list of columns:
proc sql;
connect to ODBC
(
READBUFF=10000 DATASRC="cmp.web_api" AUTHDOMAIN="MYSQL_CMP_Auth"
);
create table work."W7ZZZKOC"n as
select
*
from connection to ODBC
(
select
V_BI_ACCOUNT.ACCOUNT_NAME,
V_BI_ACCOUNT.ACQUISITION_SOURCE__C,
V_BI_ACCOUNT.ZUORA__ACTIVE__C,
V_BI_ACCOUNT.ADDRESS_LINE_1__C,
V_BI_ACCOUNT.ADDRESS_LINE_2__C,
V_BI_ACCOUNT.ADDRESS_LINE_3__C,
V_BI_ACCOUNT.AGREEMENT_DATE,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_1__C,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_2__C,
V_BI_ACCOUNT.PERSONBIRTHDATE,
V_BI_ACCOUNT.BLOCKED_REASON__C,
V_BI_ACCOUNT.BRAND__C,
V_BI_ACCOUNT.CPN__C,
V_BI_ACCOUNT.ACCCREATEDBYID,
V_BI_ACCOUNT.ACCCREATEDDATE,
V_BI_ACCOUNT.CURRENCY_PREFERENCE__C,
V_BI_ACCOUNT.CUSTOMER_FULL_NAME__PC,
V_BI_ACCOUNT.ACCOUNTID,
V_BI_ACCOUNT.ZUORA__CUSTOMERPRIORITY__C,
V_BI_ACCOUNT.DELIVERY_SALUTATION__C,
V_BI_ACCOUNT.DISPLAY_NAME,
V_BI_ACCOUNT.PERSONEMAIL,
V_BI_ACCOUNT.EMAILKEY__C,
V_BI_ACCOUNT.FACEBOOKKEY,
V_BI_ACCOUNT.FIRSTNAME,
V_BI_ACCOUNT.GENDER__C,
V_BI_ACCOUNT.PHONE,
V_BI_ACCOUNT.ACCLASTACTIVITYDATE,
V_BI_ACCOUNT.ACCLASTMODIFIEDDATE,
V_BI_ACCOUNT.LASTNAME,
V_BI_ACCOUNT.OTHER_EMAIL__C,
V_BI_ACCOUNT.PI_TYPE__C,
V_BI_ACCOUNT.ACCPARENTID,
V_BI_ACCOUNT.POSTCODE__C,
V_BI_ACCOUNT.PRIMARY_ACCOUNT_OF_THIS_CUSTOMER,
V_BI_ACCOUNT.ACCPRIMARY__C,
V_BI_ACCOUNT.ACCREASON_FOR_STATUS__C,
V_BI_ACCOUNT.ZUORA__SLA__C,
V_BI_ACCOUNT.ZUORA__SLASERIALNUMBER__C,
V_BI_ACCOUNT.SALUTATION,
V_BI_ACCOUNT.ACCSYSTEMMODSTAMP,
V_BI_ACCOUNT.PERSONTITLE,
V_BI_ACCOUNT.ZUORA__UPSELLOPPORTUNITY__C,
V_BI_ACCOUNT.X_CODE__C,
V_BI_ACCOUNT.ZUORA__ACCOUNT_ID__C,
V_BI_ACCOUNT.ZUORA__PAYMENTMETHODID__C,
V_BI_ACCOUNT.CITY,
V_BI_ACCOUNT.ORIGINAL_CREATED_DATE,
V_BI_ACCOUNT.SOURCE_SYSTEM_ID,
V_BI_ACCOUNT.STATUS,
V_BI_ACCOUNT.ZUORA__CONTACT_ID,
V_BI_ACCOUNT.ACCISDELETED,
V_BI_ACCOUNT.BILLING_ACCOUNT_NAME,
V_BI_ACCOUNT.ACZCREATEDDATE,
V_BI_ACCOUNT.ACZSYSTEMMODSTAMP,
V_BI_ACCOUNT.ACZLASTACTIVITYDATE,
V_BI_ACCOUNT.ZUORA__ACCOUNT__C,
V_BI_ACCOUNT.ZUORA__ACCOUNTNUMBER__C,
V_BI_ACCOUNT.ZUORA__AUTOPAY__C,
V_BI_ACCOUNT.ZUORA__BALANCE__C,
V_BI_ACCOUNT.ZUORA__CREDITCARDEXPIRATION__C,
V_BI_ACCOUNT.ZUORA__CURRENCY__C,
V_BI_ACCOUNT.ZUORA__MRR__C,
V_BI_ACCOUNT.ZUORA__PAYMENTTERM__C,
V_BI_ACCOUNT.ZUORA__PURCHASEORDERNUMBER__C,
V_BI_ACCOUNT.ZUORA__LASTINVOICEDATE__C,
V_BI_ACCOUNT.COUNTRY_NAME,
V_BI_ACCOUNT.COUNTRY_CODE,
V_BI_ACCOUNT.FAVOURITE_FOOTBALL_CLUB,
V_BI_ACCOUNT.COUNTY
from
web_api.V_BI_ACCOUNT as V_BI_ACCOUNT
);
%rcSet(&sqlrc);
disconnect from ODBC;
quit;
And again, when I extract the data without pass-through, it works successfully.
I found out the problem was a column name that exceeds 32 characters.
As SAS supports column names only up to 32 characters,
the query fails to find PRIMARY_ACCOUNT_OF_THIS_CUSTOMER, because the original column name is PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C.
EDIT
One more thing I found out: MySQL doesn't like the schema name being specified, nor aliases.
Therefore, the FROM clause should specify only the table name, i.e. 'from v_bi_account' rather than 'from web_api.v_bi_account',
and should not use aliases, i.e. 'from v_bi_account' rather than 'from v_bi_account as v_bi_account'.
Thank you guys so much for your help.

JSON Queries - Failed to execute

So, I am trying to execute a query using the ArcGIS API, but it should match any JSON query. I am kind of new to this query format, so I am pretty sure I must be missing something, but I can't figure out what it is.
This page allows for testing queries on the database before I actually implement them in my code. Features in this database have several fields, including OBJECTID and Identificatie. I would like to, for example, select the feature where Identificatie = 1. If I enter this in the Where field, though (Identificatie = 1), an error "Failed to execute" appears. This happens for every field except OBJECTID. Querying where OBJECTID = 1 returns the correct results. I am obviously doing something wrong, but I don't get why OBJECTID does work here. A brief explanation (or a link to a page documenting queries for JSON, which I haven't found) would be appreciated!
Identificatie, along with most other fields in the service you're using, is a string field. Therefore, you need to use single quotes in your WHERE clause:
Identificatie = '1'
Or to get one that actually exists:
Identificatie = '1714100000729432'
OBJECTID = 1 works without quotes because it's a numeric field.
Here's a link to the correct query. And here's a link to the query with all output fields included.

CakePHP Error: SQLSTATE[42S02] table not found - but it exists

You might read this question every day, so I tried another Stack Overflow answer before asking:
CakePHP table is missing even when it exists
Anyway, the table I am trying to select data from does exist (quadruple-checked uppercase/lowercase!) and it is also listed via $db->listSources().
Here's a screenshot of the query, the error message, and the last result from listing all of the datasource's tables:
http://i.stack.imgur.com/CdhcV.png
Note: If I run this query manually in phpMyAdmin, it works fine. I would say it's impossible to get the pictured output all at once in a view - now it's up to you to tell me otherwise. By the way, I am pretty sure I am using the correct datasource.
I should add that the MySQL server is hosted on another platform. Since I can use it from my localhost phpMyAdmin if I modify config.inc.php, I can promise it is not a firewall problem.
Written on behalf of xcy7e:
The mistake was to execute the query from the local model. Here's the code:
$conn = ConnectionManager::getDataSource('myDB');
$conn->query($query);
// instead of $this->query($query);

Using 10+ fields with TranslateBehavior (causing 10+ inner joins) returns SQL error

I'm setting up a project in which I want to utilize CakePHP's Translate Behavior.
Everything seemed to work fine until I reached 10 fields that I wanted it to translate. The Translate Behavior creates an INNER JOIN for each field it's trying to retrieve - which I believe is what's causing this error (only happens with 10+):
SQLSTATE[42000]: Syntax error or access violation: 1104 The SELECT
would examine more than MAX_JOIN_SIZE rows; check your WHERE and use
SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay
Two questions:
1) I tried fixing it by running the two listed SQL commands, but still no luck - how can I get it to work?
2) Is it ideal/ok/acceptable to have 10-20+ translated fields if it's going to create an INNER JOIN for each one? Should I re-think using this behavior and maybe create something on my own?
Did you try SET OPTION SQL_BIG_SELECTS = 1? More on https://stackoverflow.com/a/950576/182823
1:
It's a MySQL safety option that you can override. Use this code in the beforeFilter of your AppController to avoid this error.
function beforeFilter() {
    $this->{$this->modelClass}->query('SET SQL_BIG_SELECTS=1');
    ...
}
2:
It's OK to join a table multiple times, but some advice:
Always use indexes! Without indexes on the joined columns the query can be slow.
If you have a lot of translatable data in a lot of tables, use separate translation tables; for example, if you have Contents and Posts, use Content_I18n and Post_I18n.
http://book.cakephp.org/2.0/en/core-libraries/behaviors/translate.html -> Multiple Translation Models
You can also override the Translate Behavior. I'm using the SmoothTranslate behavior:
http://bakery.cakephp.org/articles/sky_l3ppard/2010/01/05/smoothtranslate-to-make-smooth-translations
I think the best way is to use the Translate behavior or extend it, not to write your own; it's good enough in CakePHP. And it will be better in Cake 3 (I hope).

Renaming columns in a MySQL select statement with R package RJDBC

I am using the RJDBC package to connect to a MySQL (MariaDB) database in R on a Windows 7 machine, and I am trying a statement like
select a as b
from table
but the column will always continue to be named "a" in the data frame.
This works normally with RODBC and RMySQL but doesn't work with RJDBC. Unfortunately, I have to use RJDBC, as it is the only package that has no problem with the encoding of Chinese, Hebrew, and similar characters (SET NAMES and so on don't seem to work with RODBC and RMySQL).
Has anybody experienced this problem?
I have run into the same frustrating issue. Sometimes the AS keyword would have its intended effect, but other times it wouldn't. I was unable to identify the conditions to make it work correctly.
Short Answer: (Thanks to Simon Urbanek (package maintainer for RJDBC), Yev, and Sebastien! See the Long Answer.) One thing that you may try is to open your JDBC connection using ?useOldAliasMetadataBehavior=true in your connection string. Example:
drv <- JDBC("com.mysql.jdbc.Driver", "C:/JDBC/mysql-connector-java-5.1.18-bin.jar", identifier.quote="`")
conn <- dbConnect(drv, "jdbc:mysql://server/schema?useOldAliasMetadataBehavior=true", "username", "password")
query <- "SELECT `a` AS `b` FROM table"
result <- dbGetQuery(conn, query)
dbDisconnect(conn)
This ended up working for me! See more details, including caveats, in the Long Answer.
Long Answer: I tried all sorts of stuff, including making views, changing queries, using JOIN statements, NOT using JOIN statements, using ORDER BY and GROUP BY statements, etc. I was never able to figure out why some of my queries were able to rename columns and others weren't.
I contacted the package maintainer (Simon Urbanek.) Here is what he said:
In the vast majority of cases this is an issue in the JDBC driver, because there is really not much RJDBC can do other than to call the driver.
He then recommended that I make sure I had the most recent JDBC driver for MySQL. I did have the most recent version. However, it got me thinking "maybe it IS a bug with the JDBC driver." So, I searched Google for: mysql jdbc driver bug alias.
The top result for this query was an entry at bugs.mysql.com. Yev, using MySQL 5.1.22, reported that when he upgraded from driver version 5.0.4 to 5.1.5, his column aliases stopped working, and asked if it was a bug.
Sebastien replied, "No, it's not a bug! It's a documented change of behavior in all subsequent versions of the driver." and suggested using ?useOldAliasMetadataBehavior=true, citing documentation for the JDBC driver.
Caveat Lector: The documentation for the JDBC driver states that
useColumnNamesInFindColumn is preferred over useOldAliasMetadataBehavior unless you need the specific behavior that it provides with respect to ResultSetMetadata.
I haven't had the time to fully research what this means. In other words, I don't know what all of the ramifications of using useOldAliasMetadataBehavior=true are. Use at your own risk. Does someone else have more information?
I don't know RJDBC, but in some cases, when it is necessary to give permanent aliases to columns without renaming them, you can use VIEWs:
CREATE OR REPLACE VIEW v_table AS
SELECT a AS b
FROM table
... and then ...
SELECT b FROM v_table
There is a separate function in the ResultSetMetaData interface for retrieving the column label vs the column name:
String getColumnLabel(int column) throws SQLException;
Gets the designated column's suggested title for use in printouts and
displays. The suggested title is usually specified by the SQL AS
clause. If a SQL AS is not specified, the value returned
from getColumnLabel will be the same as the value returned by the
getColumnName method.
Using getColumnLabel should resolve this issue (if not, check that your JDBC driver is following this spec).
e.g.
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
    for (int i = 1; i <= columnCount; i++) {
        String label = rsmd.getColumnLabel(i);
        System.out.println(rs.getString(label));
    }
}
This is the workaround we use for R and SAP HANA via RJDBC:
names(result)[1] <- "b"
It's not the nicest workaround, but since Aaron's solution doesn't work for us, we went with this "solution".