When generating database scripts, I'm scripting data to be migrated to a different environment. Is there a setting in the script generation that I can enable to set IDENTITY_INSERT on/off automatically so I don't have to go through each table manually in the generated script and set it? I'm using SSMS and I'd like to do this via SSMS.
Here's what I am getting:
INSERT my_table (my_table_id, my_table_name) VALUES (1, 'val1');
INSERT my_table (my_table_id, my_table_name) VALUES (2, 'val2');
Here's what I want:
SET IDENTITY_INSERT my_table ON
INSERT my_table (my_table_id, my_table_name) VALUES (1, 'val1');
INSERT my_table (my_table_id, my_table_name) VALUES (2, 'val2');
SET IDENTITY_INSERT my_table OFF
I know this is an old question, but the accepted answer and the comments to the accepted answer aren't quite correct regarding SSMS.
When using the Generate Scripts task in SQL Server Management Studio (SSMS) to generate scripts with data, SET IDENTITY_INSERT statements will be included for tables that have an identity column.
In the object explorer: Tasks -> Generate Scripts -> [All Tables or selected tables] -> Advanced -> [Schema with Data or Data]
If the table to script data from does not have a column with the identity property, it will not generate the SET IDENTITY_INSERT statements.
If the table to script data from does have a column with the identity property, it will generate the SET IDENTITY_INSERT statements.
Tested & Confirmed using SSMS 2008 & SSMS 2012
In the OP's situation, I'm guessing my_table_id did not have the identity property in the source table, but did have it in the destination table.
To get the desired output, change the source table so that my_table_id has the identity property.
This article explains the steps in depth (without using the designer in SSMS): Add or drop identity property for an existing SQL Server column - Greg Robidoux. A rough sketch of the idea follows the list below.
Create a new column with the identity property.
Transfer the data from the existing id column to the new column.
Drop the existing id column.
Rename the new column to the original column name.
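As a rough, hedged illustration of the same idea (not the article's exact script): since an existing identity column can't simply be updated in place, one common variant copies the rows into a new table that already has the identity column and then swaps names. The table and column names below follow the question; the types are assumptions.
-- Hedged sketch: new-table variant that preserves the existing id values.
-- my_table / my_table_id / my_table_name follow the question; column types are assumed.
CREATE TABLE my_table_new (
    my_table_id   INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    my_table_name VARCHAR(50)        NULL
);

SET IDENTITY_INSERT my_table_new ON;
INSERT my_table_new (my_table_id, my_table_name)
SELECT my_table_id, my_table_name FROM my_table;
SET IDENTITY_INSERT my_table_new OFF;

DROP TABLE my_table;
EXEC sp_rename 'my_table_new', 'my_table';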
You didn't say what tool you are using to generate your scripts, but if you have a tool like Red-Gate Data Compare, it will generate these statements for you, if you include the auto-increment field in the comparison. I'm not aware that SSMS has any such option.
If you have Visual Studio Premium or Ultimate edition, then you also have access to DBPro (Database Professional) that has a data compare and synchronization option. I believe this will generate the IDENTITY_INSERT statements for you as well.
Related
I want to be able to update a specific column of a table using data from another table. Here's what the two tables look like, along with the DB type and the SSIS component used to read each one (btw, both ID and Code are unique).
Table1(ID, Code, Description) [T-SQL DB accessed using ADO NET Source component]
Table2(..., Code, Description,...) [MySQL DB accessed using ODBC Source component]
I want to update the column Table1.Description using Table2.Description, by first matching the rows on Code (since Table1.Code is the same as Table2.Code).
What I tried:
Doing a Merge Join transformation using the Code column, but I couldn't figure out how to reinsert the result, because Table1 has relationships, so I can't simply drop the table and replace it with the new one.
Using a Lookup transformation, but since the two tables are on different database types, it wouldn't let me create the lookup table's connection manager (which in my case would be MySQL).
I'm still new to SSIS, but any ideas or help would be greatly appreciated.
My solution is based on @Akina's comments. Although using a linked server would've definitely fit, my requirement is to make an SSIS package to take care of migrating some old data.
The first and last steps are SQL tasks, while the Migrate ICDDx step is the DFT that transfers the data to a staging table created during the first SQL task.
Here are the SQL commands that get executed during Create Staging Table:
DROP TABLE IF EXISTS [tempdb].[##stagedICDDx];
CREATE TABLE ##stagedICDDx (
ID INT NOT NULL,
Code VARCHAR(15) NOT NULL,
Description NVARCHAR(500) NOT NULL,
........
);
And here's the SQL command (based on @Akina's comment) for transferring from staged to final (inside Transfer Staged):
UPDATE [MyDB].[dbo].[ICDDx]
SET [ICDDx].[Description] = [##stagedICDDx].[Description]
FROM [dbo].[##stagedICDDx]
WHERE [ICDDx].[Code]=[##stagedICDDx].[Code]
GO
Here's the DFT used (both the T-SQL and MySQL sources return sorted output using ORDER BY Code, so I didn't have to insert Sort components before the Merge Join):
Note: Btw, you have to set up the connection manager to retain/reuse the same connection (the RetainSameConnection property) so that the temporary table doesn't get dropped before we transfer data into it. If all goes well, then after the Transfer Staged SQL Task the connection is closed and the global temporary table is dropped.
In Oracle Database (SQL*Plus), there is an alternative method to insert values into a table, which my lecturers called "insert by reference". It looks like this:
SQL> INSERT INTO table_name VALUES ('&col_name1', '&col_name2' ...);
Enter value for col_name1: value1
Enter value for col_name2: value2
...
This enables you to use the same command repeatedly (by pressing up arrow) to enter multiple records in the table; you only need to enter the specific values separately after executing the command. And there is no need to go back to each value, erase it and type in the new value.
So my question is, is there any way to replicate this handy command in MySQL?
This is a feature of SQL*Plus, not of Oracle itself; it takes advantage of Oracle's prepared statement feature.
You need to find or develop an SQL client for MySQL that can similarly use MySQL's prepared statement feature in a nicer way, either directly through SQL or through an API (the C API is just an example).
We cannot recommend 3rd-party tools or utilities here on SO; you need to find the one that best suits your needs.
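For what it's worth, the SQL-level form of MySQL's prepared statements looks roughly like this (a minimal sketch; the table and column names are placeholders):
-- Minimal sketch of MySQL server-side prepared statements; all names are placeholders.
PREPARE ins FROM 'INSERT INTO table_name (col_name1, col_name2) VALUES (?, ?)';
SET @v1 = 'value1', @v2 = 'value2';
EXECUTE ins USING @v1, @v2;
-- Repeat the SET/EXECUTE pair (or recall them in your client) for each new row, then clean up:
DEALLOCATE PREPARE ins;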
Perhaps use a multi-row insert:
insert into table_name (col_name1, col_name2)
values (value_1_1, value_1_2), (value_2_1, value_2_2) [...]
I have an ODBC connection created in MS Access 365. This is used to create a linked table in Access to a view in my SQL DB. I am using a SQL login to authenticate with the SQL DB. The login has the db_datareader role for that DB.
If I make any changes to a record in the linked table in Access, those changes are also made in the SQL DB.
How can I avoid any changes in the Access linked table being propagated into the SQL DB?
You can add a trigger to the view like this:
CREATE TRIGGER dbo.MySampleView_Trigger_OnInsertOrUpdateOrDelete
ON dbo.MySampleView
INSTEAD OF INSERT, UPDATE, DELETE
AS
BEGIN
RAISERROR ('You are not allowed to update this view!', 16, 1)
END
You can also, in a pinch, add a union query, but it is somewhat ugly - you have to match up the column types. E.g.:
ALTER VIEW dbo.MySampleView
as
SELECT col1, col2 FROM dbo.MySampleTable
UNION
SELECT NULL, NULL WHERE 1 = 0
You can also create a new schema. Say dboR (for read only).
Add this new schema to the database, then grant the schema to the SQL user/login you're using for this database, and set the permissions on that schema to read-only.
You have to re-create the view and choose the read-only schema for it. On the Access client side, I in most cases remove the default schema prefix "dbo_" and just use the table (or view) name - you can thus continue to use the same view name you have been using in Access.
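A minimal sketch of that setup, assuming hypothetical schema, table, view, and login names:
-- Hedged sketch: a read-only schema holding the view; MyAccessLogin is an assumed login/user name.
CREATE SCHEMA dboR;
GO
CREATE VIEW dboR.MySampleView
AS
SELECT col1, col2 FROM dbo.MySampleTable;
GO
GRANT SELECT ON SCHEMA::dboR TO MyAccessLogin;
DENY INSERT, UPDATE, DELETE ON SCHEMA::dboR TO MyAccessLogin;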
I want to run some setup SQL before the content of my report is processed and then run some cleanup SQL at the end, e.g. some ALTER statements at the beginning and statements that revert the ALTER at the end.
These should run per report, and users will be accessing the reports via the web URL of the report server. I wonder if these SQL statements can be configured in the report definition file (.rdl) using BIDS, or whether I can configure this on the SSRS server side or in the underlying database. And how?
First I should say that you may not have the best process if you need to ALTER a table back and forth for a query, but I know that crazy stuff is sometimes necessary.
You can add DDL statements to your dataset query.
Here's a query for a dataset I have that creates a temp table and runs some other processes before SELECTing the data needed.
CREATE TABLE #TEMP_CENSUS(
GEO_DATA GEOMETRY NOT NULL,
VALUE DECIMAL(12, 4) NOT NULL DEFAULT 0,
NAME NVARCHAR(50) NULL,
GEO NVARCHAR(250) NULL ) ON [PRIMARY]
INSERT INTO #TEMP_CENSUS(GEO_DATA, VALUE, NAME)
exec dbo.CreateHeatMap 20, 25, ...
Unfortunately, you also want operations to run after your data is selected. For the reverting ALTER statements, you would want to create another dataset on the same data source containing those ALTER statements.
In your data source, check the Use Single Transaction box so that the two datasets are executed in order (as they appear in the dataset list): your first dataset will ALTER the tables you need and then SELECT your data, and the second query will then run to unALTER (re/de-ALTER?) the tables. You may need to add a SELECT of some sort to the second dataset query so it returns some data and SSRS doesn't freak out - I haven't had to run any DDL without returning data (yet).
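As a hypothetical illustration only (the table and column names are made up), the second, cleanup dataset query could look something like this:
-- Hypothetical cleanup dataset: revert the earlier ALTER, then return a row so SSRS has a result set.
ALTER TABLE dbo.MyReportTable DROP COLUMN TempReportColumn;
SELECT 1 AS CleanupDone;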
I have a little MS Access application (I know, I know), which accesses a table on a remote MS SQL Server.
I also have a form which allows the users to enter data into the table. The problem is that I don't want the users to be able to read or modify existing data; I only want them to enter data and store it (the data is a bit sensitive).
I tried to grant only INSERT privileges to the user connecting to the database, which resulted in an error that the table is not accessible at all.
After googling, I couldn't find anything which would solve this issue.
So my question: how can I ensure that the users can only enter data, but not modify or read existing data, in MS Access (2003)?
I would remove SELECT permissions from the table (as you already have done) and do all the I/O through a stored procedure. That way you can control exactly what is inserted into the system.
Let me know if you need help running a stored procedure in ADO and I will post something up.
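A minimal sketch of that approach, with made-up table, column, procedure, and user names:
-- Hedged sketch: an insert-only stored procedure; every name here is illustrative.
CREATE PROCEDURE dbo.usp_InsertEntry
    @SomeValue VARCHAR(25)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.SensitiveTable (SomeValue, EnteredDate)
    VALUES (@SomeValue, GETDATE());
END
GO
-- Grant the Access user EXECUTE on the procedure only, with no direct table permissions.
GRANT EXECUTE ON dbo.usp_InsertEntry TO AccessUser;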
I prefer a stored proc, but thought this was an alternative: give access to a view of the table defined WITH CHECK OPTION.
create table testview (somevalue varchar(25), entereddate datetime)
go
insert into testview values( 'First Value', getdate() )
go
create view testview_currentonly
as
SELECT
somevalue
, entereddate
FROM testview
WHERE entereddate >= getdate()
with check option
-- end view create
go
insert into testview_currentonly values( 'Second Value', getdate() )
select * from testview_currentonly
select * from testview
You can't select anything from this view, because all existing entries have an entereddate earlier than getdate() by the time you query it (this assumes the user can't manipulate the value going into the entereddate field - it should probably have a default).
For the login/user you use to access the remote SQL Server table (this is defined in the link), remove all permissions except db_datareader.
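As a rough illustration (the user name is an assumption), that amounts to something like:
-- Hedged sketch: leave the linked login with read permission only; MyAccessUser is an assumed name.
ALTER ROLE db_datareader ADD MEMBER MyAccessUser;
ALTER ROLE db_datawriter DROP MEMBER MyAccessUser;  -- drop write membership if it was ever granted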
You can do this with MS Access permissions (but be warned: it's quite a difficult area...):
Microsoft Access Database Security - Security Permissions
Types of permissions (MDB)
Finally here's what I've done:
First, I created two tables:
CREATE TABLE mydata (...)
CREATE TABLE mydata2 (...)
Then I created an INSTEAD OF trigger:
CREATE TRIGGER mytrigger ON mydata
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO mydata2 SELECT * FROM INSERTED
END
This moved every inserted entry into mydata2 instead of mydata. The form in Access remained bound to mydata though, which made the entries invisible to the user.
Thanks to CodeSlave, who also suggested this solution.