How to use MySQL and MSSQL together in the Grails datasource?

I have a Grails application that uses MySQL for authentication purposes and another application that uses MSSQL for its data. I need to combine these into one application. The datasource for the MySQL application contains the following:
dataSource {
pooled = true
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
}
The datasource for the application using MSSQL contains the following:
dataSource {
pooled = true
driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver" //jdbc driver downloaded from internet: sqljdbc4.jar and sqljdbc_auth.dll (see DisplayHistorical/grails-app/lib)
dialect = "org.hibernate.dialect.SQLServer2008Dialect"
ClassName = "org.hsqldb.jdbcDriver" //Original Code
// enable loggingSql to see sql statements in stdout
loggingSql = true
}
How would I combine these? I looked at the tutorial mentioned on this site (How do you access two databases in Grails), but it doesn't talk about adding the drivers.

If you follow the link provided earlier, you would end up with a datasource configuration like the one below:
environments {
production {
dataSource_authentication {
pooled = true
url = "jdbc:mysql://yourServer/yourDB"
driverClassName = "com.mysql.jdbc.Driver"
username = "yourUser"
password = "yourPassword"
........
}
dataSource {
pooled = true
driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
dialect = "org.hibernate.dialect.SQLServer2008Dialect"
........
}
}
}
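As for the drivers the linked tutorial does not cover: in Grails 2.x the MySQL connector is usually pulled in as a dependency in BuildConfig.groovy, while the Microsoft driver (sqljdbc4.jar) is not in Maven Central and is typically kept in the lib folder as you already do. A sketch of what to add inside the existing grails.project.dependency.resolution block (the connector version is an assumption, match it to your MySQL server):
dependencies {
    // MySQL JDBC driver for the authentication datasource
    runtime 'mysql:mysql-connector-java:5.1.29'
    // sqljdbc4.jar is not in Maven Central; keep it in the lib folder
    // (or install it into a local repository and declare it here too)
}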
Wherever required, you can use the authentication datasource explicitly.
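For example (a sketch; the class and property names are hypothetical), a domain class can be bound to the authentication datasource through its mapping block. GORM refers to a datasource defined as dataSource_authentication simply as 'authentication':
class AuthUser {
    String username
    String passwordHash

    static mapping = {
        // stored in the MySQL authentication database
        datasource 'authentication'
    }
}
Domain classes without such a mapping keep using the default MSSQL dataSource.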

Related

Recreate replica when master is recreated

I have a QA setup with a master and a replica. Both are AWS RDS MySQL. It's provisioned with Terraform, and the gist is like this:
data "aws_db_snapshot" "latest_prod_snapshot" {
db_instance_identifier = var.production_instance_id
most_recent = true
}
resource "aws_db_instance" "qa_master" {
apply_immediately = true
snapshot_identifier = data.aws_db_snapshot.latest_prod_snapshot.id
availability_zone = var.qa_master_zone
instance_class = var.master_instance_class
identifier = var.master_name
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
option_group_name = var.option_group_name
backup_retention_period = 5
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
resource "aws_db_instance" "qa_replica" {
apply_immediately = true
replicate_source_db = aws_db_instance.qa_master.id
availability_zone = var.qa_replica_zone
instance_class = var.replica_instance_class
identifier = var.replica_name
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
When I want to update it from a new snapshot, the master is always marked for recreation and the replica for an in-place change, but replication stops working after the update. Is there a workaround for that? Am I doing something weird here? Can I somehow force the replica to be recreated too?
So far I have been running terraform destroy before running terraform apply.
To trigger the replica being recreated, you need a ForceNew parameter on the replica resource to change whenever the non-replica (master) instance changes.
With RDS there is a resource_id attribute that isn't normally surfaced in places people care about (you don't specify it, you don't use it to connect, and it isn't shown in the RDS console), but it is different for each database instance, unlike the normal identifier, which you specify yourself.
That attribute needs to feed into a ForceNew parameter on the replica resource. The obvious choices here are the identifier or identifier_prefix parameters. The biggest impact is that these are also used to distinguish the database instance from others and form part of the DNS address used to connect to the database, in the form ${identifier}.${random_hash_of_account_and_region}.${region}.rds.amazonaws.com. So if you need to connect to the instance, you'll either have to have the client discover the replica address, since it will now contain the randomly generated resource_id of the non-replica instance, or have Terraform create a DNS record that CNAMEs or aliases the RDS address.
So in your case you might want something like this:
data "aws_db_snapshot" "latest_prod_snapshot" {
db_instance_identifier = var.production_instance_id
most_recent = true
}
resource "aws_db_instance" "qa_master" {
apply_immediately = true
snapshot_identifier = data.aws_db_snapshot.latest_prod_snapshot.id
availability_zone = var.qa_master_zone
instance_class = var.master_instance_class
identifier = var.master_name
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
option_group_name = var.option_group_name
backup_retention_period = 5
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
resource "aws_db_instance" "qa_replica" {
apply_immediately = true
replicate_source_db = aws_db_instance.qa_master.id
availability_zone = var.qa_replica_zone
instance_class = var.replica_instance_class
identifier = "${var.replica_name}-${aws_db_instance.qa_master.resource_id}"
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
resource "aws_route53_zone" "example" {
name = "example.com"
}
resource "aws_route53_record" "replica_instance" {
zone_id = data.aws_route53_zone.example.zone_id
name = "qa-replica-database.${data.aws_route53_zone.example.name}"
type = "CNAME"
ttl = 60
records = [aws_db_instance.replica_qa.address]
}
Now, if the production snapshot changes, the qa_master database instance resource will be recreated, which causes the qa_replica database instance resource to be recreated as well, and the Route53 record for the qa_replica instance is then updated with the new address, allowing you to always connect to the replica at qa-replica-database.example.com.

Can't update table in mysql in a Grails application

I have this weird problem. I have a User domain class in a Grails app. The class is as follows:
class User {
transient springSecurityService
String username
String name
String password
String email
String company
Date activationDate
String contactPhone
boolean enabled
boolean passwordExpired = false
boolean accountExpired
boolean accountLocked
boolean isDeleted=false
boolean isPrimary
String jobTitle
String jobFunction
String deskPhone
String mobile
String profilePicURL
boolean isLinkExpired=false
UserType userType
Date dateCreated
Date lastUpdated
static constraints = {
password nullable: true
company nullable: true
email blank: false, unique: true
name nullable: true
activationDate nullable:true
username nullable: true
enabled nullable: false
isDeleted nullable: false
passwordExpired nullable: false
jobFunction nullable:true
jobTitle nullable:true
contactPhone nullable:true
mobile nullable:true
profilePicURL nullable:true
deskPhone nullable:true
userType nullable:true
}
static auditable = true
static mapping = {
password column: '`password`'
tablePerHierarchy false
cache true
}
Set<Role> getAuthorities() {
UserRole.findAllByUser(this).collect { it.role } as Set
}
}
And there is a method activeDeactiveUser which enables/disables user authorization for some functionality as follows:
def activeDeactiveUser(String username) {
def user = User.findByUsername(username)
if (user.enabled == false)
user.enabled = true
else
user.enabled = false
if (user.validate()) {
user.save(flush: true, failOnError: true)
} else {
user.errors.allErrors.each {
print it
}
}
def userJson = new JSONObject()
userJson.put("isEnabled", user.enabled)
return userJson
}
When the app is running on localhost, the table is updating fine. But when the same code is running on the server, the table fails to update. I don't know why it's behaving like this.
The app isn't raising any exception on the save method on localhost. Maybe the problem is the different versions of MySQL on my machine and the server. Is there any way to debug the app while it is running on the server?
The app is hosted on an AWS EC2 instance running Ubuntu 14.04 and Grails version 2.4.3. The database is stored in an AWS RDS instance running MySQL 5.5.40.
There are many possible reasons for this - I think you need to gather more information for yourself and post it in this thread so we can help.
I suggest you first add logging via one of the following options.
You can add logSql to your DataSource.groovy file:
dataSource {
logSql = true
}
To produce far more readable SQL output than logSql alone would give, add the following properties in DataSource.groovy:
hibernate {
format_sql = true
use_sql_comments = true
}
Then, add the following log4j settings to Config.groovy:
log4j = {
debug 'org.hibernate.SQL'
trace 'org.hibernate.type'
}
The first setting logs the SQL commands, the second one logs the bound parameters and the bindings of the result set.
The issue can also be related to a schema update - maybe your local DB schema is not in sync with the server one. You need to check field types and constraints.
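One way to catch such a mismatch early (a sketch, assuming you are free to change dbCreate for the environment running on the server) is to let Hibernate validate the schema at startup instead of updating it:
environments {
    production {
        dataSource {
            // 'validate' makes startup fail with a descriptive error when the
            // mapped domain classes do not match the existing MySQL schema
            dbCreate = "validate"
        }
    }
}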

Grails run query postgresql on second database to generate Raw JSON

I have one PostgreSQL database which I am using inside my Grails application (configured in DataSource.groovy); let's call it DB1. I also have another PostgreSQL database which holds lots of data; let's call it DB2.
I am writing a data export procedure which takes the JSON data generated from DB2, builds domain objects from it and stores them in DB1. This data is sent from another piece of software that uses DB2. The main problem is that the two databases have different column names, so it cannot be a direct import/export.
PostgreSQL provides direct methods to generate JSON via SQL queries, e.g.:
SELECT row_to_json(t)
FROM ( select id, descrizione as description from tableXYZ ) t
It returns JSON output:
{"id":6013,"description":"TestABC"}
This JSON can be consumed by the code that I have written.
So I want to run this query on DB2 from inside the Grails application, which has DB1 configured inside DataSource.groovy.
How can I do that?
In your DataSource.groovy file, you need to create another data source that points at DB2. You can probably clone your dataSource definition for this, e.g.:
http://grails.github.io/grails-doc/2.3.9/guide/conf.html#multipleDatasources
dataSource_db2 {
pooled = true
jmxExport = true
driverClassName = "org.postgresql.Driver"
username = "XXX"
password = "YYY"
//noinspection GrReassignedInClosureLocalVar
dialect = PostgreSQLDialect
autoreconnect = true
useUnicode = true
characterEncoding = "utf-8"
tcpKeepAlive = true
//noinspection GroovyAssignabilityCheck
properties {
// See http://grails.org/doc/latest/guide/conf.html#dataSource for documentation
initialSize = 5
maxActive = 50
minIdle = 5
maxIdle = 25
maxWait = 10000
maxAge = 10 * 60000
timeBetweenEvictionRunsMillis = 1000 * 60 * 1 // 1 min
minEvictableIdleTimeMillis = 1000 * 60 * 5 // 5 min
numTestsPerEvictionRun = 3
validationQuery = 'SELECT 1'
validationQueryTimeout = 3
validationInterval = 15000
testOnBorrow = true
testWhileIdle = false
testOnReturn = false
defaultTransactionIsolation = Connection.TRANSACTION_READ_COMMITTED
removeAbandoned = true
removeAbandonedTimeout = 20 // 20s is a long query
logAbandoned = true // causes stacktrace recording overhead, use only for debugging
// use JMX console to change this setting at runtime
// the next options are jdbc-pool specific
// http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes
// https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/tomcat/jdbc/pool/PoolConfiguration.html
jmxEnabled = true
// ResetAbandonedTimer resets the timer upon every operation on the connection or a statement.
// ConnectionState caches the auto commit, read only and catalog settings to avoid round trips to the DB.
jdbcInterceptors = "ConnectionState;ResetAbandonedTimer;SlowQueryReportJmx(threshold=10000)"
abandonWhenPercentageFull = 25 // settings are active only when pool is full
}
}
To use it for database access, you can inject the javax.sql.DataSource into your services, controllers, domain classes, or other Grails artefacts, e.g.:
import javax.sql.DataSource
import groovy.sql.GroovyResultSet
import groovy.sql.Sql
class MyService {
DataSource dataSource_db2
def doQuery(String query) {
new Sql(dataSource_db2).rows(query)
}
}
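A sketch of how the row_to_json query from the question could then be consumed (ExportService, its injection of MyService, and the table/column names are hypothetical and only mirror the example query above):
import groovy.json.JsonSlurper

class ExportService {
    MyService myService // the service sketched above

    def importFromDb2() {
        def rows = myService.doQuery(
            "SELECT row_to_json(t) AS payload FROM (SELECT id, descrizione AS description FROM tableXYZ) t")
        rows.collect { row ->
            // the json column comes back as an object whose toString() is the JSON text
            new JsonSlurper().parseText(row.payload.toString())
        }
    }
}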
To have a domain object use your db2 dataSource for GORM, add this to your domain object's mapping block:
static mapping = {
datasource 'db2'
}
If you want JNDI support, you can also add something like this in your resources.groovy:
xmlns jee: "http://www.springframework.org/schema/jee"
jee.'jndi-lookup'(id: "dataSource", 'jndi-name': "java:comp/env/jdbc/db1")
jee.'jndi-lookup'(id: "dataSource_db2", 'jndi-name': "java:comp/env/jdbc/db2")

Can I get steps to connect my Grails app to MSSQL?

Please help me with the steps to connect Grails to SQL Server 2008.
I have been trying settings in both the Grails app-config.properties and DataSource.groovy files.
File 1 : app-config.properties
dataSource.dialect =org.hibernate.dialect.SQLServerDialect
dataSource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
dataSource.url=jdbc:sqlserver:thin:#xxx.xxx.x.:1433;databaseName=xxxx
dataSource.username=sa
dataSource.password=P#ssw0rd1x
File 2 : Config.groovy
activiti {
processEngineName = "activiti-engine-default"
databaseType = "sql"
databaseSchemaUpdate = true
deploymentName = appName
}
Not sure if it will work with MSSQL 2008, but for my application's connection to MSSQL 2012 I used the following changes to connect it to the database:
Step 1: In DataSource.groovy, change the datasource to:
dataSource {
pooled = true
driverClassName = "net.sourceforge.jtds.jdbc.Driver"
url = "jdbc:jtds:sqlserver://localhost:1433;DatabaseName=<Db-Name>"
username = "sa"
password = "root"
dbCreate = "update" // one of 'create', 'create-drop','update'
logSql = true
}
Step 2: Place the jtds-1.2.5.jar and jtidy-r938.jar files in the lib folder.
With that it should connect to the MSSQL database; for me it worked with SQL Server 2012, and I hope it also works with 2008.
I don't think you need to make any other changes anywhere else. :)
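If you would rather use the Microsoft driver that the question already references (sqljdbc4.jar on the classpath), a sketch of an equivalent DataSource.groovy block might look like this; the host, port, database name and credentials are placeholders:
dataSource {
    pooled = true
    driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    dialect = "org.hibernate.dialect.SQLServer2008Dialect"
    // placeholder connection details; adjust host, port and databaseName
    url = "jdbc:sqlserver://localhost:1433;databaseName=mydb"
    username = "sa"
    password = "secret"
    dbCreate = "update"
    logSql = true
}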

Codeigniter remote connection does not return queries

I have a local and a remote connection to my MySQL database. The local connection works just fine, but the remote connection, while it does connect, does not return anything. I usually get the following:
Fatal error: Call to a member function result() on a non-object
I use for the remote connection the following configuration:
$db['mydb']['hostname'] = "ip_address_of_database";
$db['mydb']['username'] = "username";
$db['mydb']['password'] = "password";
$db['mydb']['database'] = "database";
$db['mydb']['dbdriver'] = "mysql";
$db['mydb']['dbprefix'] = "";
$db['mydb']['pconnect'] = FALSE;
$db['mydb']['db_debug'] = FALSE;
$db['mydb']['cache_on'] = FALSE;
$db['mydb']['cachedir'] = "";
$db['mydb']['char_set'] = "utf8";
$db['mydb']['dbcollat'] = "utf8_general_ci";
In my function that accesses the database I check if there is a connection with the remote server and then I try to retrieve data.
$mydb = $this->load->database('mydb', TRUE);
if (!isset($mydb->conn_id) && !is_resource($mydb->conn_id)) {
$error = 'database is not connected';
return $error;
}else{
$query = $mydb->query("SELECT * FROM database LIMIT 1;");
return $query->result();
}
This works fine with the localhost database but not with the remote database. I always get the error:
Fatal error: Call to a member function result() on a non-object
Can you please help? What am I doing wrong? I'm stuck on this.
Finally, I found the solution after contacting my web hosting provider. The issue had to do with remote database access and their servers. The IP address exception and the domain name that I had added didn't do the job; I had to add an internal domain name that my host was using in order for remote database access to be allowed. I spent 2-3 hours chatting with them to find a solution.
Anyway, it is solved now. I am posting this FYI.