I am trying to insert a new user into a simple table that contains ID and Name.
This is the query that I execute:
INSERT INTO [dKArchive].[dbo].[Logins]
([IDL]
,[Name])
VALUES
(37
,'pippo.paperino')
GO
I am using Microsoft SQL Server Management Studio.
After executing the query, the value is added to the table, but when I close Microsoft SQL Server Management Studio and reopen it, the data disappears.
Why does it happen?
Thanks. Best regards.
Second attempt at answering:
SSMS has an option that makes every query run inside a transaction. You find it under
Tools -> Options -> Query Execution -> SQL Server -> ANSI -> SET IMPLICIT_TRANSACTIONS
If this option is on, you will get something close to the behaviour you describe. However, when closing SSMS, you will get a warning about uncommitted transactions. Ignoring this warning is not a good idea.
At that time, you can click Commit and all is well, or you can just turn off this option.
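As a quick check (a minimal sketch; @@TRANCOUNT is the built-in count of open transactions in the current session), you can run:
-- How many transactions are open in this session?
SELECT @@TRANCOUNT AS OpenTransactions;
-- If the count is greater than 0, commit the pending work explicitly
COMMIT TRANSACTION;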
I have this problem: I'm using a SQL Server 2008 R2 backend and an MS Access 2000 frontend where some tables are connected via ODBC.
The structure is as follows (all tables on SQL Server):
Import (not connected to Access)
Products (connected via ODBC to Access)
Pricing (connected via ODBC to Access)
I want to fill the Pricing table automatically with some data from Products and Import. This is supposed to run as a SQL Agent job with a T-SQL script.
I want to insert the data from "Products" with the following command:
INSERT INTO Pricing (Productnr, Manufacturernr)
(SELECT Productnr, Manufacturernr
FROM Products
WHERE Valid = 1
AND Productnr NOT IN (SELECT Productnr FROM Pricing ));
Right after that, the inserted rows are locked for Access and I can't change anything. If I execute SQL queries with SQL Server Management Studio or if I start queries as SQL Agent jobs, everything works fine.
Why are the rows locked in MS Access after the query ran (even if it finished successfully)? And how can I unlock them, or make them unlock automatically right after the query/job has run?
Thanks
When SQL Server inserts new rows, those new rows are in fact exclusively locked to prevent other transactions from reading or manipulating them - that's by design, and it's a good thing! And it's something you cannot change - you cannot insert without those locks.
You can unlock them by committing the transaction that they're being inserted under - once they're committed to SQL Server, you can access them again normally.
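A minimal sketch of making the commit explicit (using the INSERT from the question):
BEGIN TRANSACTION;
INSERT INTO Pricing (Productnr, Manufacturernr)
SELECT Productnr, Manufacturernr
FROM Products
WHERE Valid = 1
AND Productnr NOT IN (SELECT Productnr FROM Pricing);
-- Committing releases the exclusive locks on the newly inserted rows
COMMIT TRANSACTION;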
The error message I get says that the record has been changed by another user, and that if I save it, I would undo the other user's changes (and it offers to copy my changes to the clipboard).
This is different from "locked", and completely normal.
If you have an ODBC linked table (or a form based on the table) open, and change the data in the backend, Access doesn't know about the change.
You need to do a full requery (Shift+F9) in Access to reload the data, afterwards all records can be edited again.
I've got the solution for my problem now.
I had to add a timestamp column to the Pricing table so Access could recognize the change.
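For reference, a minimal sketch of that change (the column name is illustrative; rowversion is the current name for the timestamp data type):
-- Add a rowversion column so Access can reliably detect backend changes
ALTER TABLE Pricing ADD RowVer rowversion;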
Access loads the data into the front end when the table is first accessed. If something in the backend changes the data, you need Access to refresh it first, before you can edit it from the front end (or see the changes).
Do this (in Access) by closing and reopening the table, by switching to the table and pressing Shift+F9 as Andre suggested, or programmatically using a Requery statement. You must requery, not refresh, for it to release the locks and register the changes made in SQL.
I deleted a table in SQL Server Management Studio. Then I created a new table with the same name, but I got an error saying the table already exists. I want a completely new table with the same name.
Edit: Added more answers after comment.
Are you removing the table from the database diagram or from the object explorer? If you are removing the table from your database diagram using Visual Database tools, it will still exist in the database.
From MSDN:
The table is removed from your diagram but it continues to exist in
the database.
OR,
Try going to Tools->Options->Designers and unchecking the box that says "Prevent saving changes that require table re-creation". Then try deleting and creating the table again.
OR,
Delete the table. Close MS SQL Management Studio. Open MS SQL Management Studio again. Create table.
Old Answer
Are both statements in the same batch? From the Microsoft support page for DROP TABLE:
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected error may occur.
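If that is the cause, a minimal sketch of splitting the two statements into separate batches (the table definition here is illustrative, borrowed from the Logins example above):
-- Drop in one batch...
DROP TABLE dbo.Logins;
GO
-- ...and recreate in the next batch
CREATE TABLE dbo.Logins (IDL int PRIMARY KEY, Name nvarchar(100));
GO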
If this isn't the case I'll try to help otherwise. If I can't help otherwise I'll just delete this answer.
Basically, you still have the table open in another tab where you created it...
Steps
Close all the tabs
When no tabs are open
Create your table
And there you are... It worked for me, rather than creating the whole database again.
Click Tools -> Options -> Designers -> uncheck the "Prevent saving changes that require table re-creation" check box.
I'm trying to use the SQL Server 2008 Change Tracking feature. Once the feature is enabled, you can make use of the CHANGETABLE(...) function to query the change tracking history that is kept internally by SQL Server, e.g.:
SELECT
CT.ID, CT.SYS_CHANGE_OPERATION,
CT.SYS_CHANGE_COLUMNS, CT.SYS_CHANGE_CONTEXT
FROM
CHANGETABLE(CHANGES dbo.CONTACT,20) AS CT
where the SYS_CHANGE_CONTEXT column records the CONTEXT_INFO() session value. This column is useful for auditing who changed what etc.
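For reference, the usual local pattern (a sketch; the tag and table come from the examples above, the inserted value is illustrative) is to capture the session's CONTEXT_INFO() into a variable and pass it as the change tracking context:
-- Tag the session, then record that tag alongside the DML
SET CONTEXT_INFO 0x1256698477;
DECLARE @ctx varbinary(128) = CONTEXT_INFO();
WITH CHANGE_TRACKING_CONTEXT (@ctx)
INSERT INTO dbo.CONTACT (id) VALUES (42);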
Some of the statements that change data are executed using four-part notation by a remote SQL Server that has the home server as a linked server e.g.:
INSERT INTO [home server].[db name].[dbo].[CONTACT](id) values(#id)
My problem is that the CONTEXT_INFO() set on the remote server, in the session executing the query, does not get picked up by the change tracking on my home server, i.e. it doesn't look like CONTEXT_INFO spans a distributed query. This means that the following will not result in the CONTEXT_INFO being logged in the home server's change tracking.
-- I'm running on a remote server
WITH CHANGE_TRACKING_CONTEXT (0x1256698477)
INSERT INTO [home server].[db name].[dbo].[CONTACT](id) values(#id)
Does anyone know whether this is a limitation or if there is a way to persist/communicate CONTEXT_INFO across the distributed query?
Thanks
I was thinking about using CONTEXT_INFO to audit changes (web app), but after doing some tests I understood it's not a good idea. Because of connection pooling, CONTEXT_INFO was not working the way I desired.
I ended up using a GUID identifier associated with each logical session, plus a table that stores the session GUID and information related to the session; each audited table stores that identifier in a separate column. Not as easy to code as it would have been with CONTEXT_INFO().
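A minimal sketch of that layout (all names here are illustrative assumptions):
-- One row per logical application session
CREATE TABLE dbo.AuditSession (
    SessionGuid uniqueidentifier NOT NULL PRIMARY KEY,
    UserName nvarchar(128) NOT NULL,
    StartedAt datetime2 NOT NULL DEFAULT SYSDATETIME()
);
-- Each audited table carries the identifier of the session that last changed it
ALTER TABLE dbo.CONTACT ADD LastSessionGuid uniqueidentifier NULL;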
And as far as I understood from the documentation, change tracking is not designed for auditing purposes (which I think is what you are trying to do).
I am writing an SSIS package that has a conditional split from a SQL Server source that splits records to either be updated or inserted into a MySQL database.
The SQL Server connection uses the provider .NET Provider for OleDB\SQL Server Native Client 10.0.
The MySQL connection is a MySQL ODBC 5.1 ADO.NET connection.
I was thinking about using the OLE DB Command branching off of the conditional split to update records, but I cannot use this to connect to the MySQL database.
Does anyone know how to accomplish this task?
I would write to a staging table for updates including the PK and columns to be updated and then execute an UPDATE SQL statement using that table and the table to be updated. The alternative is to use the command for every row and that just doesn't seem to perform that well in my experience - at least compared to a nice fat batch insert and a single update command.
For that matter, I guess you could do without the conditional split altogether, write everything to a staging table and then use an UPDATE and INSERT in SQL back to back.
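A minimal sketch of that back-to-back pattern on the destination side (table and column names are illustrative, borrowed from the Pricing example above; the syntax is portable between SQL Server and MySQL):
-- Apply all staged changes to existing rows in one set-based statement
UPDATE Pricing
SET Manufacturernr = (SELECT s.Manufacturernr
                      FROM Pricing_Staging AS s
                      WHERE s.Productnr = Pricing.Productnr)
WHERE Productnr IN (SELECT Productnr FROM Pricing_Staging);
-- Insert the staged rows that don't exist yet
INSERT INTO Pricing (Productnr, Manufacturernr)
SELECT s.Productnr, s.Manufacturernr
FROM Pricing_Staging AS s
WHERE s.Productnr NOT IN (SELECT Productnr FROM Pricing);
-- Clear the staging table for the next run
TRUNCATE TABLE Pricing_Staging;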
The following MSDN blog post might help you. I haven't tried this myself.
How do I UPDATE and DELETE if I don’t have an OLEDB provider?
The post suggests the following three options.
Script Component
Store the data in a Recordset
Use a custom component (like Merge destination component)
The author also had posted two other articles about MySQL prior to posting the above article.
Connecting to MySQL from SSIS
Writing to a MySQL database from SSIS
Hope that points you in the right direction.
Given:
A table named Table1 that has the following columns:
ID
ColumnA
ColumnB
Typing Table1. in Microsoft SQL Server Management Studio provides me with a list of columns for that table.
Scenario:
I open up Table1 in the design view and add ColumnC to it. I save Table1 and refresh it to see the new column, ColumnC, show up in the Object Explorer.
Going back to the Query Window, I type Table1. but ColumnC is not available to be selected. Typing it out gives me a syntax error, but running a query with the column in it works as expected.
Is there a menu item somewhere that I need to click to get Intellisense to pick up the DDL changes I have made?
Edit -> IntelliSense -> Refresh Local Cache
That should do it.
Ctrl-Shift-R is the shortcut.
In addition to refreshing the cache you also need to do the following if you haven't already:
Go to Tools >> Options >> Text Editor >> Transact-SQL >> General >> IntelliSense
Check the Auto List Members box and also the Parameter Information box, then save and restart.
I also highly recommend the Redgate SQL Toolbox if you regularly use SQL Server. SQL Compare, SQL Data Compare, and SQL Prompt 5 have saved me lots of time in development.
I have to restart management studio when this happens. Refreshing object explorer doesn't update the intellisense.