I have a QA setup with a master and a replica; both are AWS RDS MySQL instances. It's provisioned with Terraform, and the gist is like this:
data "aws_db_snapshot" "latest_prod_snapshot" {
db_instance_identifier = var.production_instance_id
most_recent = true
}
resource "aws_db_instance" "qa_master" {
apply_immediately = true
snapshot_identifier = data.aws_db_snapshot.latest_prod_snapshot.id
availability_zone = var.qa_master_zone
instance_class = var.master_instance_class
identifier = var.master_name
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
option_group_name = var.option_group_name
backup_retention_period = 5
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
resource "aws_db_instance" "qa_replica" {
apply_immediately = true
replicate_source_db = aws_db_instance.qa_master.id
availability_zone = var.qa_replica_zone
instance_class = var.replica_instance_class
identifier = var.replica_name
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
When I want to update it from a new snapshot, the master is always marked for recreation and the replica for an in-place change, but replication stops working after the update. Is there a workaround for that? Am I doing something weird here? Can I somehow force the replica to be recreated too?
So far I have been running terraform destroy before terraform apply.
To trigger recreation of the replica, you need a ForceNew parameter on the replica resource that changes whenever the non-replica instance changes.
With RDS there is a resource_id attribute that isn't normally surfaced in places people care about (you don't specify it, you don't use it to connect, and it's not shown in the RDS console), but it is different for each database instance, unlike the normal identifier, which you specify yourself.
That attribute then needs to feed into a ForceNew parameter on the replica resource. The obvious choices are the identifier or identifier_prefix parameters. The biggest impact is that these not only distinguish the database instance from others but are also used as part of the DNS address for connecting to the database, in the form ${identifier}.${random_hash_of_account_and_region}.${region}.rds.amazonaws.com. So if you need to connect to the instance, you'll either need the client to discover the replica address (since it will now contain the randomly generated resource_id of the non-replica instance) or have Terraform create a DNS record that CNAMEs or aliases the RDS address.
So in your case you might want something like this:
data "aws_db_snapshot" "latest_prod_snapshot" {
db_instance_identifier = var.production_instance_id
most_recent = true
}
resource "aws_db_instance" "qa_master" {
apply_immediately = true
snapshot_identifier = data.aws_db_snapshot.latest_prod_snapshot.id
availability_zone = var.qa_master_zone
instance_class = var.master_instance_class
identifier = var.master_name
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
option_group_name = var.option_group_name
backup_retention_period = 5
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
resource "aws_db_instance" "qa_replica" {
apply_immediately = true
replicate_source_db = aws_db_instance.qa_master.id
availability_zone = var.qa_replica_zone
instance_class = var.replica_instance_class
identifier = "${var.replica_name}-${aws_db_instance.qa_master.resource_id}"
parameter_group_name = var.parameter_group_name
auto_minor_version_upgrade = false
multi_az = false
performance_insights_enabled = true
performance_insights_retention_period = 7
vpc_security_group_ids = [var.security_group_id]
skip_final_snapshot = true
enabled_cloudwatch_logs_exports = ["audit", "error", "slowquery"]
}
resource "aws_route53_zone" "example" {
name = "example.com"
}
resource "aws_route53_record" "replica_instance" {
zone_id = aws_route53_zone.example.zone_id
name = "qa-replica-database.${aws_route53_zone.example.name}"
type = "CNAME"
ttl = 60
records = [aws_db_instance.qa_replica.address]
}
Now, if the production snapshot changes then the qa_master database instance resource will be recreated which will lead to the qa_replica database instance resource also being recreated and then the Route53 record for the qa_replica instance will be updated with the new address, allowing you to always connect to the replica at qa-replica-database.example.com.
I have one PostgreSQL database that I use inside my Grails application (configured in DataSource.groovy); let's call it DB1. I also have another PostgreSQL database that holds a lot of data; let's call it DB2.
I am writing a data export procedure that takes JSON data generated from DB2, builds domain objects from it, and stores them in DB1. This data is sent from another piece of software that uses DB2. The main problem is that the two databases have different column names, so it cannot be a direct import/export.
PostgreSQL provides direct methods to generate JSON via SQL queries, e.g.:
SELECT row_to_json(t)
FROM ( select id, descrizione as description from tableXYZ ) t
It returns JSON output:
{"id":6013,"description":"TestABC"}
This JSON can be consumed by the code that I have made.
So I want to run this query on DB2 from inside the Grails application, which has DB1 configured in DataSource.groovy.
How can I do that?
In your DataSource.groovy file, you need to create another data source that points at DB2. You can probably clone your existing dataSource definition for this, e.g.:
http://grails.github.io/grails-doc/2.3.9/guide/conf.html#multipleDatasources
dataSource_db2 {
pooled = true
jmxExport = true
driverClassName = "org.postgresql.Driver"
username = "XXX"
password = "YYY"
//noinspection GrReassignedInClosureLocalVar
dialect = PostgreSQLDialect
autoreconnect = true
useUnicode = true
characterEncoding = "utf-8"
tcpKeepAlive = true
//noinspection GroovyAssignabilityCheck
properties {
// See http://grails.org/doc/latest/guide/conf.html#dataSource for documentation
initialSize = 5
maxActive = 50
minIdle = 5
maxIdle = 25
maxWait = 10000
maxAge = 10 * 60000
timeBetweenEvictionRunsMillis = 1000 * 60 * 1 // 1 min
minEvictableIdleTimeMillis = 1000 * 60 * 5 // 5 min
numTestsPerEvictionRun = 3
validationQuery = 'SELECT 1'
validationQueryTimeout = 3
validationInterval = 15000
testOnBorrow = true
testWhileIdle = false
testOnReturn = false
defaultTransactionIsolation = Connection.TRANSACTION_READ_COMMITTED
removeAbandoned = true
removeAbandonedTimeout = 20 // 20s is a long query
logAbandoned = true // causes stacktrace recording overhead, use only for debugging
// use JMX console to change this setting at runtime
// the next options are jdbc-pool specific
// http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Common_Attributes
// https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/tomcat/jdbc/pool/PoolConfiguration.html
jmxEnabled = true
// ResetAbandonedTimer resets the timer upon every operation on the connection or a statement.
// ConnectionState caches the auto commit, read only and catalog settings to avoid round trips to the DB.
jdbcInterceptors = "ConnectionState;ResetAbandonedTimer;SlowQueryReportJmx(threshold=10000)"
abandonWhenPercentageFull = 25 // settings are active only when pool is full
}
}
To use it for database connection access, you can inject the javax.sql.DataSource into your services, controllers, domain classes, or other Grails artefacts, e.g.:
import javax.sql.DataSource
import groovy.sql.GroovyResultSet
import groovy.sql.Sql
class MyService {
DataSource dataSource_db2
def doQuery(String query) {
new Sql(dataSource_db2).rows(query)
}
}
To have a domain object use your db2 dataSource for GORM, add this to the domain object's mapping block:
static mapping = {
datasource 'db2'
}
If you want JNDI support, you can also add something like this in your resources.groovy:
xmlns jee: "http://www.springframework.org/schema/jee"
jee.'jndi-lookup'(id: "dataSource", 'jndi-name': "java:comp/env/jdbc/db1")
jee.'jndi-lookup'(id: "dataSource_db2", 'jndi-name': "java:comp/env/jdbc/db2")
I have used two MySQL databases in our project. One database holds the basic user information and the other stores the daily activities. Now I need to combine tables from the two databases: fetch a user's daily activity together with the user information, which means joining against the master database.
I found a solution in PHP, but I want a solution for Zend Framework 1.12.
I used the multidb functionality to query each database separately:
resources.multidb.tb.adapter = "pdo_mysql"
resources.multidb.tb.host = "localhost"
resources.multidb.tb.username = "root"
resources.multidb.tb.password = ""
resources.multidb.tb.dbname = "#####"
resources.multidb.tb.default = true
resources.multidb.pl.adapter = "pdo_mysql"
resources.multidb.pl.host = "localhost"
resources.multidb.pl.username = "root"
resources.multidb.pl.password = ""
resources.multidb.pl.dbname = "#######"
But I want a single query that joins two tables in different databases, for example:
SELECT db1.table1.somefield, db2.table1.somefield FROM db1.table1
INNER JOIN db2.table1 ON db1.table1.someid = db2.table1.someid WHERE
db1.table1.somefield = 'queryCrit';
Having in mind Zend's Join Inner declaration:
public function joinInner($name, $cond, $cols = self::SQL_WILDCARD, $schema = null)
Assuming $this is, for example, a Zend_Db_Table_Abstract implementation with its adapter set to db1 (via _setAdapter()) and its schema set to "#####" (not strictly necessary, since that is used as the default):
$select = $this->select(true)->setIntegrityCheck(false)
->from(array('t1'=>'table1'), array('somefield'))
->joinInner(array('t1b'=>'table1'),
't1.someid = t1b.someid',
array('t1b.somefield'),
'######')
->where('t1.somefield = ?', $queryCrit);
Please note the fourth parameter of the joinInner() method.
Hope this helps.
This is a question about implementing sharding through Sequelize.
Currently I have sharded databases that I need to support in my app, but this gives me a lot of trouble. I tried to use your framework but did not succeed. I will explain the architecture:
Suppose I have 3 databases: one is a "meta db" that stores a map of user data (i.e. which shard each user's data is on), and two shards with identical structure storing the user data.
var MetaUser = sequelize.define("MetaUser", {
shardId: { type: DataTypes.INTEGER(10), allowNull: false }
});
var User = sequelize.define("User", {
name: { type: DataTypes.STRING, allowNull: false },
about: { type: DataTypes.STRING(2048) },
});
The problem is that all these databases can be on different servers, so I need to create 3 different connections, i.e. 3 different Sequelize instances. That looks like the following (suppose dbConfig is read from a JSON file and the models are described in isolated files):
var metaConnection = new Sequelize(dbConfig.meta.database,
dbConfig.meta.username,
dbConfig.meta.password,
dbConfig.meta.options);
module.exports.dbMeta = metaConnection;
var user1Connection = new Sequelize(dbConfig.shard1.database,
dbConfig.shard1.username,
dbConfig.shard1.password,
dbConfig.shard1.options);
module.exports.dbUser1 = user1Connection;
var user2Connection = new Sequelize(dbConfig.shard2.database,
dbConfig.shard2.username,
dbConfig.shard2.password,
dbConfig.shard2.options);
module.exports.dbUser2 = user2Connection;
module.exports.MetaUser = metaConnection.import(__dirname + "/models/MetaUser");
module.exports.User1 = user1Connection.import(__dirname + "/models/User");
module.exports.User2 = user2Connection.import(__dirname + "/models/User");
This gives me problems:
1. I cannot reference one model from another inside a model's module, since I don't have access to the other model's connection there.
2. I want to use the User model in my project and have no need for MetaUser, but I have to go through MetaUser to work out the necessary shard connection.
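The routing described in the second point can be sketched as a small helper. This is only a sketch of the pattern, not Sequelize-specific machinery: makeUserRouter and the shard map are hypothetical names, and findByPk assumes a recent Sequelize API (older versions used findById).

```javascript
// Sketch: route a User lookup through MetaUser to the right shard.
// "models" is expected to carry the exports from the question:
// MetaUser, User1, User2. Names here are illustrative only.
function makeUserRouter(models) {
  // Map shardId values stored in MetaUser to the per-shard User models.
  const shards = { 1: models.User1, 2: models.User2 };

  return async function findUser(userId) {
    // First hit the meta db to learn which shard holds this user.
    const meta = await models.MetaUser.findByPk(userId);
    if (!meta) return null; // unknown user

    const UserModel = shards[meta.shardId];
    if (!UserModel) throw new Error("unknown shard " + meta.shardId);

    // Then query the actual shard.
    return UserModel.findByPk(userId);
  };
}
```

The shard map is the only place that knows about the individual connections, so the rest of the app can call findUser without ever touching MetaUser directly.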
If anyone has experienced similar problems, please point me to a solution.
I have a Grails application that uses MySQL for authentication purposes and another application that uses MSSQL for its database work. I need to combine these together as one application. The datasource for MySQL contains the following:
dataSource {
pooled = true
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
}
The datasource for the application using MSSQL contains the following:
dataSource {
pooled = true
driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver" //jdbc driver downloaded from internet: sqljdbc4.jar and sqljdbc_auth.dll (see DisplayHistorical/grails-app/lib)
dialect = "org.hibernate.dialect.SQLServer2008Dialect"
ClassName = "org.hsqldb.jdbcDriver" //Original Code
// enable loggingSql to see sql statements in stdout
loggingSql = true
}
How would I combine these? I looked at the tutorial mentioned on this site (How do you access two databases in Grails), but it doesn't talk about adding the drivers.
If you follow the link provided earlier, you would end up with a datasource configuration like the one below:
environments {
production {
dataSource_authentication {
pooled = true
url = "jdbc:mysql://yourServer/yourDB"
driverClassName = "com.mysql.jdbc.Driver"
username = "yourUser"
password = "yourPassword"
........
}
dataSource {
pooled = true
driverClassName = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
dialect = "org.hibernate.dialect.SQLServer2008Dialect"
........
}
}
}
Wherever required, you can use the authentication datasource explicitly.
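For example, a domain class that should live in the authentication database can be pinned to that datasource in its mapping block. This is a sketch under the configuration above; AuthUser and its fields are hypothetical names, and the string passed to datasource is the suffix after "dataSource_" in DataSource.groovy:

```groovy
// Hypothetical domain class stored via the dataSource_authentication
// definition above; GORM routes all its reads/writes to that database.
class AuthUser {
    String username
    String passwordHash

    static mapping = {
        datasource 'authentication'
    }
}
```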
I am getting this error on one of my development machines. It does not happen on another machine that points to the same database. The two servers are definitely not identical; I don't know what piece of software missing from one server causes this issue. Both machines run the same OS, Windows Server 2008 R2.
using (MYDB.MyDB oDB = new MYDB.MyDB())
{
var query = from t in oDB.Products
where (_ProductId.HasValue?_ProductId==t.Productid:true)
select new Product()
{
ProductId = t.Productid,
ManufacturerId = t.Manufacturerid,
ManufacturingNumber = t.Manufacturingnumber,
CustomProduct = t.Iscustomproduct ? "Yes" : "No",
IsCustomProduct = t.Iscustomproduct,
SubCategoryName = t.Subcategory.Subcategoryname
};
return query.ToList();
}
Any help is highly appreciated.
Thanks,
Senthilkumar
I cannot reproduce the exception in a comparable case, but the part _ProductId.HasValue?_ProductId==t.Productid:true looks suspect. I would change it as follows; if you're lucky this also solves your problem, and otherwise it's an improvement anyway:
var query = oDB.Products.AsQueryable();
if (_ProductId.HasValue)
{
query = query.Where(t => t.Productid == _ProductId.Value);
}
var result = query.Select(t => new Product() {...
Another cause could be that Product.ProductId is not a nullable int.