I have set up AWS DMS with RDS MySQL as the source endpoint and Redshift as the target endpoint, using "full load and CDC".
The setup is working fine, and even UPDATE and DELETE statements are being replicated to Redshift. However, when I create a new table in my source RDS MySQL, it is not replicated to the target Redshift.
Please note: there is no primary key associated with the new table.
This happens because, when a new table is created, the DMS user (the MySQL user) does not have access to read it. You have to explicitly grant the user permission to read the new table:
GRANT SELECT ON SCHEMA.TABLE_NAME TO dmsuser_readonly;
Then add supplemental logging so the user can access the logs for the table:
ALTER TABLE SCHEMA.TABLE_NAME ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
PS: Grant all the required access to the dmsuser using the schema owner user.
Let me know in case of any issues.
To specify the table mappings that you want to apply during migration, you can create a JSON file. If you create a migration task using the console, you can browse for this JSON file or enter the JSON directly into the table mapping box. If you use the CLI or API to perform migrations, you can specify this file using the TableMappings parameter of the CreateReplicationTask or ModifyReplicationTask API operation.
Example: migrating some tables in a schema
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "%"
            },
            "rule-action": "include"
        }
    ]
}
Create a new rule in the format shown above and specify your table name. Note that rule-id and rule-name must be unique across the task. For more information, please check the AWS DMS documentation on table mapping.
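As a sketch, assuming your new table is named new_table in the Test schema (both names hypothetical), the extra selection rule could look like this:
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "new_table"
            },
            "rule-action": "include"
        }
    ]
}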
I have an ODBC connection created in MS Access 365. It is used to create a linked table in Access to a view in my SQL DB. I am using a SQL login to authenticate with the SQL DB; the login has the db_datareader role for that DB.
If I make any changes to a record in the linked table in Access, those changes are also made in the SQL DB.
How can I avoid any changes in the Access linked table being propagated into the SQL DB?
You can add a trigger to the view like this:
CREATE TRIGGER dbo.MySampleView_Trigger_OnInsertOrUpdateOrDelete
ON dbo.MySampleView
INSTEAD OF INSERT, UPDATE, DELETE
AS
BEGIN
    RAISERROR ('You are not allowed to update this view!', 16, 1)
END
You can also, in a pinch, add a union query, but it is somewhat ugly - you have to match up the column types. E.g.:
ALTER VIEW dbo.MySampleView
AS
SELECT col1, col2 FROM dbo.MySampleTable
UNION
SELECT NULL, NULL WHERE 1 = 0
You can also create a new schema, say dboR (for read-only).
Add this new schema to the database, then grant it to the SQL user/login you're using for this database, and set the permissions on the schema to read-only.
You then have to re-create the view in the read-only schema. On the Access client side, I in most cases remove the default "dbo_" schema prefix and just use the table (or view) name - you can thus continue to use the same view name you have been using in Access.
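A minimal sketch of that setup in T-SQL, assuming a hypothetical login named access_login and the view from the earlier example:
-- Create the read-only schema
CREATE SCHEMA dboR;
GO
-- Allow the Access login to read from it, and nothing else
GRANT SELECT ON SCHEMA::dboR TO access_login;
DENY INSERT, UPDATE, DELETE ON SCHEMA::dboR TO access_login;
GO
-- Re-create the view in the read-only schema
CREATE VIEW dboR.MySampleView
AS
SELECT col1, col2 FROM dbo.MySampleTable;
GO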
I'm trying to sync data from RDS MySQL to Amazon Redshift. For that, I created a Data Pipeline scheduled to run once. It synced one table; I then tried another table named 'roles', but that failed with the error message "output table named 'public.roles' doesn't exist and no createTableSql was provided". The actual result of the pipeline is as follows:
RedshiftTableCreateActivity - Finished
RDSToS3CopyActivity - Finished
S3ToRedshiftCopyActivity - FAILED ("output table named 'public.roles' doesn't exist and no createTableSql was provided")
S3StagingCleanupActivity - CASCADE_FAILED
For the pipeline, I tried both the TRUNCATE and OVERWRITE_EXISTING insert modes.
Can anyone help me with this?
It seems that your Redshift table "roles" does not exist.
Also, you can specify createTableSql as "create table if not exists roles(your table definition)".
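A sketch of what that createTableSql could contain, with hypothetical columns (replace them with your actual roles definition):
-- Hypothetical definition; use your real column list
CREATE TABLE IF NOT EXISTS public.roles (
    id         INTEGER NOT NULL,
    name       VARCHAR(255),
    created_at TIMESTAMP
);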
We have a Business Objects XI4 universe and two Oracle database schemas (BoDB, BoDB_CONNECT).
BoDB_CONNECT has to execute the queries using the BoDB schema.
I don't have the password for the BoDB schema.
So basically I want to log in with the BoDB_CONNECT username and execute the reports using the BoDB schema (by qualifying tables as BoDB.TABLENAME).
But while I am creating the connection, there is no separate schema name in the universe; it just has the username and password.
I don't want to hardcode the owner name of every table as BoDB. Is there any way to do this dynamically?
First, you have to be given access to the tables or nothing will work. That is, BoDB_CONNECT needs SELECT permission on all of the tables in the BoDB schema that will be referenced in the universe.
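A minimal sketch of those grants, run as the BoDB schema owner (or a DBA), with a hypothetical table name:
-- Repeat for each BoDB table the universe references
GRANT SELECT ON BoDB.SOME_TABLE TO BoDB_CONNECT;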
Once that is done, you have a few options to implement your requirement. The most straightforward way is to simply include the schema owner with the table name. This happens automatically when you drag in a table to your model in UDT or IDT, and is the recommended solution.
You can also easily switch the owners, if, for example, the tables are moved to a new schema. Select all of the tables to move, and right-click. In UDT, select "Rename Table" and in IDT select "Change Qualifier/Owner". You can then set the new owner name and that will be applied to all selected tables.
If, for some reason, you don't want the schema name associated with the table, there are two options (both sketched below):
Create a private synonym in the BoDB_CONNECT schema for each table to be referenced in BoDB (ex. create synonym foo for bodb.foo). Thus, the universe will just have a reference to foo. Note, however, that BI4.1 does not currently support private synonyms in UDT/IDT. If you create objects that reference private synonyms, they will work correctly in WebI, but they will not parse in UDT/IDT. I believe this is a bug (since it worked in all prior versions), and I have a support case open with SAP currently.
Switch the default schema. You can change the BEGIN_SQL parameter to set the default schema. In UDT this is done via File->Parameters->Parameter tab; in IDT it's Data Foundation->Properties->Parameters. In either case, you'd set the value of BEGIN_SQL to ALTER SESSION SET CURRENT_SCHEMA=bodb. This statement will be executed at the start of each query session, so references to foo will resolve to bodb.foo. Note, however, that this does not apply to actions within IDT/UDT itself; you will get parse errors on objects that don't have an owner specified, but the queries will work in WebI.
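Both options sketched in Oracle SQL, using the hypothetical table foo from above:
-- Option 1: a private synonym, created in the BoDB_CONNECT schema
CREATE SYNONYM foo FOR bodb.foo;
-- Option 2: the statement to put in BEGIN_SQL; it runs at the start of each query session
ALTER SESSION SET CURRENT_SCHEMA = bodb;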
My question:
How can I sync two MySQL databases (an offline local database with a master online database)?
The problem is that the database is relational, and the ids are, as always, auto-incremented, so if I just sync using INSERT it will mess up my references.
This is for a clinic management app I made. The problem is that it currently runs on a server, but sometimes the internet connection goes down or is slow at my users' clinics, so I need to let them work in offline mode (store everything in a local db) and manually sync (bi-directionally) with the remote database at the end of the day.
So basically each clinic should have its own local db, and they should all sync to a central db.
Example tables:
db.Central.users

| id | user | clinic   |
|----|------|----------|
| 01 | demo | day care |
| 02 | nurs | er       |
| 03 | demX | day care |

db.day care.users

| id | user | clinic   |
|----|------|----------|
| 01 | demo | day care |
| 02 | demX | day care |

(note that the ids don't necessarily match between the central and local db, yet the structure of the db is identical on both)
Example of the database layout:
each user has many visits and plugins
each visit contains one user as patient_id and one user as doctor_id
each plugin has one user and many inputs
each plugin_input has one plugin
I have 2 databases, one on the server and the other hosted locally, for offline mode.
What I want is to be able to sync the local db with the online one. But since I have more than one user, and more than one local db, each local db will have nearly the same ids, while the online db should contain all of them combined.
So how can I sync them together?
I use PHP / MySQL (online) / SQLite (local).
There are a couple of things you can do.
Firstly, you could consider a composite primary key for the user table - for instance, you could include "Clinic" in the primary key. That way, you know that even if the auto increment values overlap, you can always uniquely identify the user.
The "visit" table then needs to include patient_clinic and doctor_clinic in the foreign keys.
Alternatively - and far dirtier - you can simply set the auto increment fields for each clinic to start at different numbers: ALTER TABLE user AUTO_INCREMENT = 10000; will set the first auto_increment for the user table to 10000; if the next clinic sets it at 20000 (or whatever), you can avoid overlaps (until you exceed that magic number, which is why it's dirty).
I'd be tempted to redesign the application architecture to have an API at the "remote" end and a job that runs at the end of each day at the "local" end to post all the "local" data to the "remote" API.
This way you could have the "local" job create a JSON message similar to this:
{
    "user_visit": {
        "user_id": 1,
        "doctor_id": 1,
        "plugins": [
            {
                "plugin_name": "name",
                "data": {
                    "data_field1": "VALUE",
                    "data_field2": "VALUE2",
                    "data_fieldxx": "VALUExx"
                }
            }
        ]
    }
}
And send each message to the API. In the API, you would then iterate over the JSON message, reconstructing the relationships and inserting them into the "remote" database accordingly.
Create the "id" and "id_offline" column in the local database. make all relationships between tables with "id_offline" and use automatic increment.
The "id" will match the MySQL "id" and will start with a null value.
When sending the data to the server, a check will be made if the "id" is equal to null, if it is, "insert" if not "update", after that use the MySQL "RETURNING" function to return the automatically generated id and return this information to the device that made the request.
When the device receives the new ids, update the "id" column of the local database to be the same as MySQL.
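A minimal sketch of that layout, with hypothetical columns (SQLite syntax for the local side, MySQL for the server):
-- Local (SQLite): id_offline is the real local key; id mirrors the server id
CREATE TABLE users (
    id_offline INTEGER PRIMARY KEY AUTOINCREMENT, -- used by all local foreign keys
    id         INTEGER,                           -- server id; NULL until synced
    user       TEXT,
    clinic     TEXT
);

-- Server (MySQL): after inserting a row whose local id was NULL,
-- fetch the generated key and send it back to the device
INSERT INTO users (user, clinic) VALUES ('demo', 'day care');
SELECT LAST_INSERT_ID();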
I have a table named Email in models.py. I want to add additional columns to it. I tried adding the additional column to the models.py file, saving it, and then running
$ python manage.py syncdb
but it is not updating the table columns (I imagine because it recognizes that the table already exists in the database and skips over it).
How do I update a table that already exists in Django?
syncdb creates tables if they do not already exist; it does not handle alterations to tables that already exist. You either have to alter the tables manually or use a migration tool like South.
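A sketch of the manual route, assuming a hypothetical app named myapp and a new column named subject (by default Django names the table appname_modelname, lowercased):
-- Run directly against your database; the app and column names are placeholders
ALTER TABLE myapp_email ADD COLUMN subject VARCHAR(255);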
Django does not support automatic changes to the schema. See the Django book chapter on models, in particular the section titled "Making Changes to a Database Schema". Some third-party support for database migration is available.