My universe in development uses the development database (prefix 'D_').
I have one universe in the Integration environment.
This one is for end users' tests. A new version of the universe is simply copied from the development environment to the Integration environment.
The universe in the Integration environment has to use another database (prefix 'I_').
I tried to solve this by defining a data security profile on the universe in the Integration environment (I defined the profile, and on the 'Tables' tab I mapped the corresponding tables).
The thing is, it works as expected for tables, but not for aliases. Moreover, when I try to insert a new 'transition' from a D_ table to the corresponding I_ table and click 'Insert', no alias tables are shown.
QUESTION: How do I define these 'transitions' from D_ to I_ for alias tables?
With a security profile we can only replace universe tables with database tables. I don't know much about the new table-qualifier option.
Have you checked the option "change table qualifiers and owners", which is available in the data foundation when you right-click a table?
Thanks,
Sachin
Related
Use case: We have come across a scenario where we need to migrate tables, or the records in a table, from one environment (e.g. dev) to another (e.g. QA/Test). The RDS (MySQL) server is different for the dev and QA environments.
Problem statement: We need to migrate one table, or selected records from a table, from dev to QA (assuming the database and table already exist in the QA environment). Before migrating, we need to update the columns that hold environment-specific details such as vpc_id, subnet_id, etc.
Approach we tried: Using the mysqldump utility, we created an SQL file containing INSERT statements, manually updated the vpc_id and subnet_id column values in those statements, and then executed the dump file against the QA environment.
Need help: How can we build an automated solution for this task? There are multiple scenarios to handle, e.g. the record already exists in the QA table, or the record exists in QA but a few columns were changed in dev since the last migration.
Please let me know if more details are required on this problem.
I suggest a different approach: use DB migration scripts in all environments rather than transferring data between them. That way you can customize the data for each environment from environment variables or a config file.
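As a minimal MySQL sketch of that idea (env_config, servers, and the column names here are hypothetical placeholders, not part of the original setup), each environment gets a small config table populated with its own values, and the migration script reads from it instead of hard-coding environment-specific literals. An upsert also covers the "record already exists" case:

-- Hypothetical per-environment config table; populate it differently in dev and QA.
CREATE TABLE IF NOT EXISTS env_config (
  config_key   VARCHAR(64) PRIMARY KEY,
  config_value VARCHAR(255) NOT NULL
);
-- e.g. in QA:
-- INSERT INTO env_config VALUES ('vpc_id', 'vpc-qa-123'), ('subnet_id', 'subnet-qa-456');

-- A migration script can then seed rows without environment-specific literals;
-- ON DUPLICATE KEY UPDATE handles rows that already exist in QA
-- (assumes servers.name is a unique key):
INSERT INTO servers (name, vpc_id, subnet_id)
SELECT 'app-server-1', v.config_value, s.config_value
FROM env_config v, env_config s
WHERE v.config_key = 'vpc_id' AND s.config_key = 'subnet_id'
ON DUPLICATE KEY UPDATE vpc_id = VALUES(vpc_id), subnet_id = VALUES(subnet_id);

The same migration script then runs unchanged in every environment, and only the contents of the config table differ.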
Is there a suggested way to handle "schemas" in MySQL? For example, if I have one database called events and I want two environments, dev and prod, what would be a good way to do that? Currently I add a table prefix, but that seems a bit hack-ish.
You create a separate database for that, because MySQL does not have the concept of a schema the way e.g. PostgreSQL does.
You create one database for production, e.g. prod_database, with the table names event and event_type, and one database for dev, e.g. dev_database, with the same table names, since you always want to have the same table names across environments.
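A minimal sketch of that layout (the column definitions are made up for illustration):

CREATE DATABASE dev_database;
CREATE DATABASE prod_database;

-- Identical table names in both; only the database name differs per environment.
CREATE TABLE dev_database.event      (id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE dev_database.event_type (id INT PRIMARY KEY, label VARCHAR(50));
-- ...repeat the same CREATE TABLE statements for prod_database.

The application then picks the database in its connection settings, so the queries themselves stay identical across environments.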
You could (and should) even use the same database name if you host the database on different servers, which also makes sense for production and development/staging, e.g. to test server version upgrades on one setup without affecting production.
I'm developing a web platform to manage student registrations in the schools of my region. For that I have 17 databases running on MySQL (5.7.19): one main database, and 16 others representing the schools. The school databases must all have exactly the same schema, each containing the data of its own school. I separated them this way to avoid latency, since each school can register many applications (16k on average), so the requests could get heavier over time.
Now I have a serious problem: when I change the schema of one school's database, I have to do it manually for all the others to keep the schemas consistent, because my SQL requests are made independently of the school. For example, if I add a new field to table_b of database_school5, I have to do the same manually on table_b of all the remaining databases.
What can I do to manage these changes efficiently? Is there an automated solution? Is there a DBMS suited to this problem?
Somebody told me that PostgreSQL can achieve this easily with INHERITANCE, but that only concerns tables, unless my research was poor.
I want every change I make to one database's schema, whether it is adding a table, adding a field, removing a field, adding a constraint, etc., to be automatically propagated to the other databases.
Thanks in advance.
SELECT ... FROM information_schema.schemata
WHERE schema_name LIKE 'database_school%'
AND schema_name != 'the 17th (main) database'
AND schema_name != 'database_school5' -- since it has already been done.
That will find the 16 names. What you put into ... is a CONCAT(...) to construct the ALTER TABLE ... statements.
Then you do one of these:
Plan A: Manually copy/paste those ALTERs into the mysql command-line tool to run them.
Plan B: Wrap it all in a stored procedure that loops through the results of the SELECT and prepares+executes each one, as sketched below.
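A minimal sketch of Plan B (the procedure name and the new_field column being added are assumed for illustration; adjust the generated ALTER to your actual change):

DELIMITER //
CREATE PROCEDURE apply_to_all_schools()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE ddl_stmt TEXT;
  -- One generated ALTER per school database still to be changed.
  DECLARE cur CURSOR FOR
    SELECT CONCAT('ALTER TABLE `', schema_name, '`.`table_b` ADD COLUMN new_field INT')
    FROM information_schema.schemata
    WHERE schema_name LIKE 'database_school%'
      AND schema_name != 'database_school5';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO ddl_stmt;
    IF done THEN LEAVE read_loop; END IF;
    -- Prepare and execute each generated statement dynamically.
    SET @sql = ddl_stmt;
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;

CALL apply_to_all_schools();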
After reading a few articles here and around, I have realised that database version control in a development team is actually of high importance.
Until now I have been using a simple dump of the whole database each time there is an update; if only one table was altered, sometimes we could get away with dumping just that table and reimporting it. Not the best, but it works quite well for additive changes, and we haven't had any hiccups yet.
Now I save a .mwb (MySQL Workbench diagram) file in the git repository of the project I'm working on.
I also use dbv for schema management, along with git, with each branch named after the project, and it's working quite well. This lets me version schema changes with the ability to revert or roll back.
However, what about the data contained in the tables? How can that be maintained? Maybe I'm better off just sticking with the old method. I understand that on projects with the same DB structure but different data this is fine, but what about sites with specific database data that needs to be versioned and managed?
Also, what about the base of already-deployed sites that need database changes? How can this be seamless? Some have suggested update/alter scripts, and that works fine with default values and such. But what if I have made a change on a website platform that requires every website's database to be changed while keeping the data intact?
I've worked mostly in business application development and configuration management. Your question is representative of the challenges in such an environment: when you upgrade Microsoft Word, for instance, you don't need to change all documents right away from doc to docx, and those documents even have a simpler structure than a full relational database.
Not so for business applications: users skip releases, make unauthorized changes to the data model, and the system needs to keep running and providing the correct numbers...
For our own applications (the largest one has around 600 tables) we use a self-developed CASE tool that includes branching/merging, but the approach can also be applied manually.
Versioning the Data Model
The data model can be written down in a structured way: for instance as table contents (CSV to be loaded into a table with metadata), or as code that detects the version in use and adds columns and tables when they are missing, including non-trivial migrations.
This even allows multiple users at the same time to change the data model.
When you use auto-detection (for instance, we use a call named "verify_column" instead of "add_column"), this even allows smooth migration independent of the release number the customer is upgrading from. Such a procedure analyzes the table to be changed and issues the correct DDL, such as alter table t1 add col1 number not null when a column is missing, or alter table t1 modify col1 not null when the column is already present but nullable.
For Oracle and SQL Server I can provide you with a few sample procedures. In MySQL I would code this in a client-side language, preferably OS-independent, to allow installations to run on Windows and Linux; maybe using Apache Ant if you have experience with that.
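For illustration only, here is a hypothetical MySQL sketch of the verify_column idea (the answer above would rather do this client-side for MySQL; the procedure name and behavior are assumed, not the CASE tool's actual code):

DELIMITER //
CREATE PROCEDURE verify_column(IN p_table VARCHAR(64),
                               IN p_column VARCHAR(64),
                               IN p_definition VARCHAR(255))
BEGIN
  -- Only act when the column does not exist yet in the current database.
  IF NOT EXISTS (SELECT 1
                 FROM information_schema.columns
                 WHERE table_schema = DATABASE()
                   AND table_name = p_table
                   AND column_name = p_column) THEN
    SET @sql = CONCAT('ALTER TABLE `', p_table, '` ADD COLUMN `',
                      p_column, '` ', p_definition);
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END IF;
END //
DELIMITER ;

-- Usage: CALL verify_column('t1', 'col1', 'INT NOT NULL');

A fuller version would also handle the "present but nullable" case described above by comparing the existing definition and issuing an ALTER TABLE ... MODIFY instead.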
Versioning Data
We split the tables in four categories:
R: referential data; data the application site must provide before they can actually use the system, for instance general ledger account codes. Referential data seldom changes after go-live and does not grow continuously in size. Its contents reflect the business model of the site where the application is used.
T: transaction data; data the site registers, changes and removes while using the application, for instance general ledger entries. Transaction data starts at 0 and grows continuously: when the company doubles in revenue, transaction data also doubles.
S: seeded data; data NOT maintained by the user at the site but provided and maintained by the developing party. Essentially this is code turned into data; for example, 'F' stands for 'Female'. Errors in seeded data can lead to system errors.
O: the rest (ideally not needed, since such tables are technical, but some systems require a temporary table A or a scratch table B).
The contents of tables in category 'S' (seeded data) are placed under version control. We normally register these as metadata in our CASE tool, where they are named 'data sets', but you can also use, for instance, Microsoft Excel or even code.
For example, in Excel you would have a list of rows of seeded data. In column A you might enter an Excel function like =B..&"|"&C..& "|" & ... which concatenates everything and makes it suitable for loading by a loader tool.
For example in code, you might have a call like:
verifySeed('TABLE_A', 'CODE', 'VALUE')
The Excel approach is a little hard to bring under version control while allowing multiple users to change the contents at the same time; the approach with code is very simple.
Please remember to also add a mechanism for removing obsoleted seeded data, for instance by explicitly listing obsoleted rows, or by automatically removing all seeded data that is present in the tables but was not touched by the last installation.
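A hypothetical sketch of such a verifySeed call as a MySQL procedure (it assumes each seeded table has code/value columns with a unique key on code; a real implementation may differ):

DELIMITER //
CREATE PROCEDURE verifySeed(IN p_table VARCHAR(64),
                            IN p_code VARCHAR(30),
                            IN p_value VARCHAR(255))
BEGIN
  -- Insert the seed row, or refresh its value if it already exists.
  SET @sql = CONCAT('INSERT INTO `', p_table, '` (code, value) VALUES (?, ?) ',
                    'ON DUPLICATE KEY UPDATE value = VALUES(value)');
  SET @c = p_code;
  SET @v = p_value;
  PREPARE stmt FROM @sql;
  EXECUTE stmt USING @c, @v;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;

-- Usage, matching the 'F' = 'Female' example above:
-- CALL verifySeed('TABLE_A', 'F', 'Female');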
You would need to keep a journal of transactions on your data model that is synchronized with your code versions. For each update that adds information (i.e. a new field) you can simply put statements like 'ALTER TABLE x ADD COLUMN y ...' with a DEFAULT value (perhaps computed by a function) in an update script, and an 'ALTER TABLE x DROP COLUMN y ...' in the downgrade script. You would need to export your data before you truncate information in a table; you can convert the dumped table data to SQL for the inverse transaction, so that you can restore the missing information from it.
You can use a 'journal' table within your data model to keep track of these transactions, using simple ordinals that denote the applied scripts. Whenever the software is installed, it can compare these numbers to build a list of transactions to play to move the database from state N to state X, backwards or forwards, without losing any data!
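A minimal sketch of such a journal table (the names are assumed):

CREATE TABLE schema_journal (
  version     INT PRIMARY KEY,      -- ordinal of the applied script
  applied_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  description VARCHAR(255)
);

-- On installation, compare the highest applied ordinal with the scripts
-- shipped with the code, play the missing ones forwards (or the inverse
-- scripts backwards), and record each one:
SELECT COALESCE(MAX(version), 0) AS current_state FROM schema_journal;
INSERT INTO schema_journal (version, description) VALUES (42, 'add column y to table x');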
What is the use of SYNONYM in SQL Server 2008?
In some enterprise systems, you may have to deal with remote objects over which you have no control. For example, a database that is maintained by another department or team.
Synonyms can help you decouple the name and location of the underlying object from your SQL code, so you can code against the synonym even if the table you need is moved to a new server or database, or renamed.
For example, I could write a query like this:
insert into MyTable
(...)
select ...
from remoteServer.remoteDatabase.dbo.Employee
but then if the server, database, schema, or table changes, it would impact my code. Instead, I can create a synonym for the remote object and use that in my code:
insert into MyTable
(...)
select ...
from EmployeeSynonym
If the underlying object changes location or name, I only need to update my synonym to point to the new object.
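For instance, the synonym above could be created and later repointed like this (SQL Server has no ALTER SYNONYM, so repointing is a drop-and-recreate; the newServer and newDatabase names are hypothetical):

CREATE SYNONYM dbo.EmployeeSynonym
FOR remoteServer.remoteDatabase.dbo.Employee;

-- Later, when the table moves, only the synonym changes:
DROP SYNONYM dbo.EmployeeSynonym;
CREATE SYNONYM dbo.EmployeeSynonym
FOR newServer.newDatabase.dbo.Employee;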
http://www.mssqltips.com/sqlservertip/1820/use-synonyms-to-abstract-the-location-of-sql-server-database-objects/
Synonyms provide a great layer of abstraction, allowing us to use friendly and/or local names for verbosely named or remote tables, views, procedures and functions.
For example:
Suppose you have a server Server1, a schema ABC, and a table named Employee, and you need to access the Employee table from Server2 to run a query.
You would have to write Server1.ABC.Employee, which exposes everything: server name, schema name, and table name.
Instead, you can create a synonym: CREATE SYNONYM EmpTable FOR Server1.ABC.Employee
Then you can write: SELECT * FROM Peoples p1 INNER JOIN EmpTable emp ON emp.Id = p1.ID
This gives you the advantages of abstraction, ease of change, and scalability.
Later, if you want to change the server name, schema, or table name, you only have to change the synonym; there is no need to search for and replace every reference.
Once you have used it, you will feel the real advantage of synonyms. They can also be combined with linked servers, which provides even more advantages for developers.
An example of the usefulness of this might be if you had a stored procedure on a Users database that needed to access a Clients table on another production server. Assuming you created the stored procedure in the database Users, you might want to set up a synonym such as the following:
USE Users;
GO
CREATE SYNONYM Clients
FOR Offsite01.Production.dbo.Clients;
GO
Now when writing the stored procedure, instead of having to write out that entire alias every time you access the table, you can just use the alias Clients. Furthermore, if you ever change the location or the name of the production database, all you need to do is modify one synonym instead of having to modify all of the stored procedures which reference the old server.
From: http://blog.sqlauthority.com/2008/01/07/sql-server-2005-introduction-and-explanation-to-synonym-helpful-t-sql-feature-for-developer/
It seems (from here) to create an alias for another object, so that you can refer to it easily, much like
select * from table_with_long_name as ln
but permanent and pervasive.
Edit: it works for user-defined functions and for local and remote objects, not only tables.
I've been a long-time Oracle developer and am making the jump to SQL Server.
But another great use for synonyms is during the development cycle. If you have multiple developers modifying the same schema, you can use a synonym to point at your own schema rather than modifying the "production" table directly. That allows you to do your thing, and other developers will not be impacted while you are making modifications and debugging.
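A sketch of that workflow (alice_dev and production_db are made-up names for your own working schema and the shared database):

-- While developing, point the shared name at your own working copy:
CREATE SYNONYM dbo.Orders FOR alice_dev.Orders;

-- Once your changes are merged, repoint it at the shared table:
DROP SYNONYM dbo.Orders;
CREATE SYNONYM dbo.Orders FOR production_db.dbo.Orders;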
I am glad to see these in SQL Server 2008...
A synonym is a database object that serves the following purposes:
Provides an alternative name for another database object, referred to as the base object, that can exist on a local or remote server.
Provides a layer of abstraction that protects a client application from changes made to the name or location of the base object.
I have never needed the first one, but the second is rather helpful.
msdn is your friend
You can actually create a synonym in an empty database and point it at an object in another database, and it will work as it should even though it lives in a completely empty database (besides the synonym you created, of course).
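A quick illustration (the database and table names are hypothetical):

CREATE DATABASE EmptyShell;
GO
USE EmptyShell;
GO
-- The only object in this database is the synonym itself:
CREATE SYNONYM dbo.Customers FOR OtherDb.dbo.Customers;
GO
SELECT * FROM dbo.Customers; -- resolves against OtherDb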