Will NEWSEQUENTIALID() SQL always generate unique ID irrespective of Host computer? - sql-server-2008

This is a generic query. My scenario is: I have a DB (MS SQL) and create a table with a column of type uniqueidentifier, assigning its values using NEWSEQUENTIALID(). I know it will always generate a unique ID. But what if I deploy the same DB on three machines (two are transactional DBs and the third is a replication DB)? In the replication DB, I will alter the column so it does not assign a value by itself. From the two transactional DBs, I will replicate the data to the replication DB daily. NOW THE QUERY IS: will the IDs generated on the two transactional DBs be unique when I replicate them to the replication DB? I.e., are the generated IDs unique across any machine, or only within one machine?

Yes, it will still be globally unique.
Have a look at the MSDN page for it:
http://msdn.microsoft.com/en-gb/library/ms189786.aspx
By "specified computer" it is referring to the ordering guarantee: each GUID is greater than those previously generated, and being greater than the last generated value is only guaranteed on that machine. Its uniqueness is global.
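As a concrete illustration, NEWSEQUENTIALID() can only be used in a DEFAULT constraint on a uniqueidentifier column; a minimal sketch (table and column names are made up):

```sql
-- Hypothetical table: each row gets a sequential GUID on insert.
-- NEWSEQUENTIALID() is only valid inside a DEFAULT constraint.
CREATE TABLE dbo.Orders (
    OrderId uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID(),
    OrderDate datetime2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId)
);
```

On each machine the generated GUIDs are ascending, which keeps a clustered index on the column from fragmenting; across machines only uniqueness, not ordering, is guaranteed.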

Related

Foreign keys across different servers

I am looking to have one main database with global data such as users and subscriptions. Additionally, each subscription will have its own database; I refer to this type of database as a child.
The databases will be located on different servers, and those servers may change from time to time. Because of this, the child databases cannot (as far as I am aware) get the benefit of foreign keys against data in the global database, e.g. linking a "tool" in the "tools" table, which has a column user_id = 12, to the user in the global database.
The question is: is it OK for me to include columns in the child databases that store IDs referencing data in the global database? Are there facilities I can put in place to recreate what foreign keys offer?
I am running MySQL 5.7 with the InnoDB engine. The system runs on Laravel 5.2.
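One common shape for this (a sketch with hypothetical table and column names) is to store the global ID as a plain, indexed column in the child database and enforce integrity at the application layer, since a real FOREIGN KEY cannot span servers:

```sql
-- Child-database table; user_id points at users.id in the global DB
-- but is NOT an enforced foreign key.
CREATE TABLE tools (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,  -- logical reference to global users.id
    name    VARCHAR(100) NOT NULL,
    KEY idx_tools_user_id (user_id)
) ENGINE=InnoDB;
```

The index keeps lookups by the logical reference fast; existence checks and cascade-style cleanup have to be done in application code.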

How to change autoincrement offset and step value for only a single table?

I have a monolithic database which has a table with around 60 million rows.
The setup is master-master replicated and one of the masters writes to even autoincrement ids and other master writes to odd autoincrement ids.
But I want to change the setup so that I can use a step size of 4 and offsets 1 and 3 for a single table (the table in question) in the entire database.
Is it even possible?
No.
The MySQL documentation clearly states that the auto-increment offset and step are per-instance settings and apply to all tables on that MySQL server.
The documentation can be seen at https://dev.mysql.com/doc/refman/8.0/en/replication-options-master.html#sysvar_auto_increment_increment.
It is not possible to restrict the effects of these two variables to a single table; these variables control the behavior of all AUTO_INCREMENT columns in all tables on the MySQL server. If the global value of either variable is set, its effects persist until the global value is changed or overridden by setting the session value, or until mysqld is restarted. If the local value is set, the new value affects AUTO_INCREMENT columns for all tables into which new rows are inserted by the current user for the duration of the session, unless the values are changed during that session.
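The two variables in question look like this; setting them affects every AUTO_INCREMENT column on the instance, which is exactly why a per-table step of 4 cannot be configured:

```sql
-- Applies instance-wide, not per table.
SET GLOBAL auto_increment_increment = 4;  -- step size
SET GLOBAL auto_increment_offset    = 1;  -- use 3 on the other master
```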

Can Galera handle 600-4500 databases? [duplicate]

I want to use Galera cluster in our production environment, but I have some concerns:
Each table must have at least one explicit primary key defined.
Each table must run under InnoDB or XtraDB storage engine.
Chunk up your big transaction in batches. For example, rather than having one transaction insert 100,000 rows, break it up into smaller chunks of e.g., insert 1000 rows per transaction.
Your application can tolerate non-sequential auto-increment values.
Schema changes are handled differently.
Handle hotspots/Galera deadlocks by sending writes to a single node.
I would like some clarification on all the aforementioned points. Also, we have over 600 databases in production; can Galera work in this environment?
Thanks
That is a LOT to handle in one shot. There are two kinds of issues: table creation (involves the schema, see point 5) and the applications that use those tables. I'll try:
1) Each table must have at least one explicit primary key defined.
When you create a table, it cannot be a table that DOES NOT have a primary key. Tables are created with fields and indexes, and one of those indexes must be declared as the PRIMARY KEY.
2) Each table must run under the InnoDB or XtraDB storage engine.
When tables are created, they must have ENGINE=InnoDB or ENGINE=XtraDB. Galera does not replicate MyISAM tables.
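Points 1 and 2 together just mean every CREATE TABLE should look something like this (hypothetical table):

```sql
-- Explicit primary key + InnoDB: both required for Galera replication.
CREATE TABLE customer (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(100) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```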
3) Chunk up your big transactions into batches. For example, rather than having one transaction insert 100,000 rows, break it up into smaller chunks, e.g. 1,000 rows per transaction.
This is not related to your schema but to your application. Try not to have an application that INSERTs a lot of data in one transaction. Note that a huge transaction will work, but it is risky. This is NOT a requirement, but a suggestion.
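A sketch of the batching idea, with hypothetical table names: commit every ~1,000 rows instead of all 100,000 at once.

```sql
-- Instead of: INSERT INTO target SELECT * FROM staging;  (one huge transaction)
START TRANSACTION;
INSERT INTO target SELECT * FROM staging WHERE id BETWEEN 1 AND 1000;
COMMIT;

START TRANSACTION;
INSERT INTO target SELECT * FROM staging WHERE id BETWEEN 1001 AND 2000;
COMMIT;
-- ...repeat for each chunk.
```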
4) Your application can tolerate non-sequential auto-increment values.
With a cluster, multiple servers can accept writes. If a field is auto-incremented, each cluster member could be trying to increment the same field. Your application should NEVER assume that the next ID is contiguous with the previous one. For auto-increment fields, do not impose a value; let the DB handle it.
5) Schema changes are handled differently.
The schema is the description of the tables and indexes, not the transactions that add, delete, or retrieve information. You have multiple servers, so a schema change has to be handled with care so that all servers catch up.
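Galera controls how DDL is replicated through the wsrep_OSU_method variable: TOI (total order isolation, the default) runs the change on all nodes in the same ordered slot, while RSU (rolling schema upgrade) applies it only on the local node, so you repeat it node by node. A sketch:

```sql
-- Default: DDL is replicated and applied cluster-wide in total order.
SET GLOBAL wsrep_OSU_method = 'TOI';

-- Alternative: apply the change only on this node, then repeat per node.
SET GLOBAL wsrep_OSU_method = 'RSU';
```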
6) Handle hotspots/Galera deadlocks by sending writes to a single node.
This is both application and DB related. A deadlock is a condition where two different parts of an app each grab a value: one asks the DB to lock ValueA so it can be changed and then tries to lock ValueB, while the other locks ValueB first and then tries to lock ValueA. Neither can proceed, because each has locked the value the other needs next. To avoid this, it's best to write to only one server in the cluster and use the other servers for reading. Do note that you can still have deadlocks in your applications, but you can keep Galera itself from creating the situation.
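The classic opposite-lock-order deadlock looks like this (hypothetical table; the two sessions run concurrently):

```sql
-- Session 1:                          -- Session 2:
START TRANSACTION;                     -- START TRANSACTION;
UPDATE t SET v = v + 1 WHERE id = 1;   -- UPDATE t SET v = v + 1 WHERE id = 2;
-- blocks waiting for row 2 ...        -- ... blocks waiting for row 1:
UPDATE t SET v = v + 1 WHERE id = 2;   -- UPDATE t SET v = v + 1 WHERE id = 1;
-- Deadlock: each session holds the row lock the other needs.
```

Routing all writes to one node means conflicting transactions queue up on that node's row locks instead of colliding at certification time across the cluster.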

Syncing/maintaining updated data in 2 database tables (MySQL)

I have 2 databases:
Database 1
Database 2
Each database has one table, say Table 1 (in Database 1) and Table 2 (in Database 2).
Table 1 is basically a copy of Table 2 (just for backup).
How can I sync Table 2 when Table 1 is updated?
I am using MySQL with the InnoDB storage engine, and the back end is programmed in PHP.
I could check for updates every 15 minutes using a PHP script, but that takes too much time because each table has 51,000 rows.
So, how can I achieve something like: if an administrator/superuser updates Table 1, that update is immediately applied to Table 2?
Also, is there a way bi-directional updates can work, i.e. both tables can be masters?
Instead of Table 1 being the only master, could both Table 1 and Table 2 be masters, so that an update to either table is applied to the other accordingly?
If I'm not wrong, what you are looking for is Replication, which does exactly this for you. If you configure transactional replication, every DML operation will be cascaded automatically to the mirrored DB, so there is no need for your application to poll continuously.
Quoted from MySQL Replication document
Replication enables data from one MySQL database server (the master)
to be replicated to one or more MySQL database servers (the slaves).
Replication is asynchronous - slaves need not be connected permanently
to receive updates from the master. This means that updates can occur
over long-distance connections and even over temporary or intermittent
connections such as a dial-up service. Depending on the configuration,
you can replicate all databases, selected databases, or even selected
tables within a database.
Per your comment: yes, bi-directional replication can also be configured.
See Configuring Bi-Directional Replication
As Rahul stated, what you are looking for is replication.
The standard replication in MySQL is master -> slave, which means that one of the databases is the "master" and the rest are slaves. All changes must be written to the master DB and will then be copied to the slaves. More info can be found in the MySQL documentation on replication.
There is also an excellent guide on the DigitalOcean community forums on master <-> master replication setup.
If the "Administrator/Superuser" requirement weren't in your question, you could simply use MySQL's replication functions on the databases.
If you want the data to be synced to Table 2 immediately upon inserting into Table 1, you could use a trigger on the table. In that trigger you can check which user submitted the data (if you have a column in the table specifying which user inserted it). If the user is an admin, have the trigger duplicate the data; if the user is a normal user, do nothing.
Then, for normal users entering data, you could keep a counter on each row, increasing by 1 for each new 'normal' user's row. In the same trigger, you could also check what the counter has reached. Say, once it reaches 10, duplicate all the rows to the other table, reset the counter, and remove the old counter values from the just-duplicated rows.
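The admin-only duplication could be sketched with a trigger like this (it assumes both databases live on the same server and that Table 1 has an updated_by column — both are assumptions, not from the question):

```sql
-- Hypothetical: copy admin inserts from db1.table1 into db2.table2.
DELIMITER //
CREATE TRIGGER trg_copy_admin_rows
AFTER INSERT ON db1.table1
FOR EACH ROW
BEGIN
  IF NEW.updated_by = 'admin' THEN
    INSERT INTO db2.table2 (id, data, updated_by)
    VALUES (NEW.id, NEW.data, NEW.updated_by);
  END IF;
END//
DELIMITER ;
```

Note that a trigger can only reach another database on the same server; if the two databases sit on different servers, you are back to replication.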

Can I safely change data in a replicated SQL Server table on the target machine?

I replicate a table from one SQL Server 2008 instance to another which already works fine. I'd also like to set a field in the replicated table on the target instance of the SQL Servers, but I'm not quite sure whether this is allowed (even though it seems to work).
Reason behind this: Replication from server A to B, processing of rows on server B and then setting (e.g. a flag such as "processed") when the row has been processed. This information is not available on server A and can only be set on server B.
A more cumbersome way would involve a separate table on server B which would have to keep entries of IDs in the replicated table that have already been processed, but maybe this is not necessary?
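For reference, the "cumbersome" tracking-table approach would leave the replicated table untouched and record processed IDs on server B in a side table (hypothetical names):

```sql
-- Lives only on server B; never touched by replication.
CREATE TABLE dbo.ProcessedRows (
    RowId       int NOT NULL PRIMARY KEY,  -- key of the replicated row
    ProcessedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
```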
With transactional replication, by default the Subscribers should be treated as read-only. If data is changed at a Subscriber - whether by inserts, updates, or deletes - it can cause data consistency errors, and a reinitialization could wipe the Subscriber data out, since the article property @pre_creation_cmd is set to drop by default.
If you're going to be updating data at the Subscribers, then I would suggest using updatable subscriptions for transactional replication, peer-to-peer replication, or merge replication.