SSDT: Unresolved reference to object - sql-server-2014

I have an existing SQL Server 2014 database and I want to add it to source control (SSDT in Visual Studio 2017).
I have a database project with lots of views and stored procedures.
MyDatabase is the current database.
Every view and stored procedure is written in the following way:
create view MyView
as
select
Id
from MyDatabase..MyTable
".." means the default schema name here (dbo). And it works in SQL Server. But SSDT considers such a construct as an error:
View MyView has an unresolved reference to MyDatabase.dbo.MyTable.
So SSDT knows perfectly well that the database is MyDatabase and that the skipped schema name is dbo.
But I can't build my project with such errors. I also can't rewrite MyDatabase..MyTable as MyDatabase.dbo.MyTable.
So is there any way to solve this problem in SSDT?

The 3-part name can be replaced with [$(DatabaseName)]..MyTable:
select Id from MyDatabase..MyTable
=>
select Id from [$(DatabaseName)]..MyTable
Using local 3-part names in programmability objects
While VSTS:DB does not support local 3 part names it does support the use of variables and literals to resolve references to external databases. The $(DatabaseName) variable is an ambient variable that will have its value replaced at the time of deployment. This variable gets its value from the project properties deployment tab. Since $(DatabaseName) is always replaced at deployment with the target database name and references through variables are resolved you may use a variable in your local 3-part names.
Our guidance is to not use local 3-part names as it introduces an unnecessary layer of abstraction and dependency on the database name
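If you accept that trade-off, the view from the question would look like this (a sketch; $(DatabaseName) is the built-in ambient variable, so nothing extra needs to be defined):
create view MyView
as
select
Id
from [$(DatabaseName)]..MyTable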


How to scope a MySQL JOOQ rename table query to the same database?

I have a scala application that manages multiple MySQL database schemas, which includes modifying (adding, renaming, etc.) tables. The commands are issued over a connection pool that connects to a generic management database in the database server.
Because the application is designed to be cross-database, I use JOOQ to render SQL queries (execution is done via a separate JDBC module).
I am experiencing issues with JOOQ's alterTable(...).renameTo(...) DSL. Consider the following example:
We have a table "TestTable" in database "TestDatabase". Let's say I want to rename that table simply to "Foo", keeping it in "TestDatabase".
This code:
...
val context = DSL.using(SQLDialect.MYSQL_5_7)
val query = context
.alterTable(table(name("TestDatabase", "TestTable")))
.renameTo(name("TestDatabase", "Foo"))
...
Generates: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `Foo`
However, since the connection pool I'm using is connected to my management database, it just renames the table to "Foo" and moves it to my management database. I would have expected the SQL to be: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `TestDatabase`.`Foo`. I tried a variety of alternatives to invoke the .renameTo method and convince it to use the fully qualified name, to no avail:
.renameTo(table(name(...))) -> same behaviour.
.renameTo("`TestDatabase`.`Foo`") -> Escapes the name with backticks, treats it as one name instead of a qualified name.
I'm wondering if I'm missing something, if this is intended behaviour, or maybe even a bug or design shortcoming of JOOQ.
Is there a way to rename the table using fully qualified names?
Thank you!
That's a bug in jOOQ: https://github.com/jOOQ/jOOQ/issues/8042
Your workaround is close. This doesn't work:
.renameTo("`TestDatabase`.`Foo`")
As you've noticed, behind the scenes, the DSL.name() API is used to wrap the target name, because the renameTo() method doesn't implement the plain SQL templating API. You can, however, explicitly use plain SQL templating by writing as a workaround:
.renameTo(table("`TestDatabase`.`Foo`"))
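With plain SQL templating, the qualified name survives rendering, so the generated statement should now be something like:
ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `TestDatabase`.`Foo`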

No columns returned SSIS

I am implementing an SSIS package and am currently trying to do the following:
Truncate the destination table
Fetch the data by executing the stored procedure and insert it into the destination table.
I have created an Execute SQL task to address step 1 and a data flow with an OLE DB source and OLE DB destination to address the second point. It's been working successfully so far but isn't working for one of my stored procedures that uses temp tables.
When I edit the OLE DB source and click the preview button, I get the error "no column returned".
I know that SSIS has an issue with generating columns while executing stored procedures that depend on temp tables. I have converted the stored proc to use table variables and it's now able to return columns in SSIS when I do a preview. The only downside is that the stored procedure is taking a longer time to execute: 1 hour 15 minutes, compared to 15 minutes when using temp tables.
I did see a suggestion to use SET FMTONLY before executing the stored procedure as an alternative to changing to table variables, but that didn't seem to work, as I got a syntax or permission-denied error.
Could somebody suggest a solution to my problem that does not compromise on performance?
Sounds like you've already read all the approaches to using temp tables in SSIS, including the IF 1=0... trick? If you haven't seen that one yet, google it.
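For anyone who hasn't, a minimal sketch of that trick (the column names and types here are hypothetical; the dummy branch never runs, but it hands the driver a result-set shape at design time):
IF 1 = 0
BEGIN
    -- Dummy SELECT matching the real output shape; never executes.
    SELECT CAST(NULL AS INT) AS Id,
           CAST(NULL AS VARCHAR(50)) AS SomeColumn;
END
-- ... the real procedure body using #temp tables goes here ...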
You say that using table variables causes your stored procedure to take about 5 times longer than using temp tables. The most likely reason for that is that you are indexing your temp tables but not your table variables. If you didn't know that table variables can be indexed: they can. You might try that.
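A sketch of an indexed table variable, in case it helps (the inline INDEX syntax requires SQL Server 2014 or later; PRIMARY KEY and UNIQUE constraints work on older versions too; names are hypothetical):
DECLARE @work TABLE
(
    Id   INT         NOT NULL PRIMARY KEY, -- index via constraint
    Name VARCHAR(50) NOT NULL,
    INDEX IX_Name (Name)                   -- inline nonclustered index, 2014+
);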
Finally, a solution that you haven't mentioned is that you can replace your temporary table with a real table that gets truncated when you're done using it.
Short comment:
Try EXEC ... WITH RESULT SETS and specify the metadata yourself for a proc with temp tables, or use the Script Component as a source and specify the output columns yourself.
Long comment:
Technically speaking, it is the driver/database you are using in SSIS that decides the behavior when working with temp tables.
Metadata is an important factor when using SSIS's pipeline components. By metadata, I mean the names of the columns, their data types, etc. that a pipeline component uses. When designing a data flow, someone/something has to provide this metadata to the components that require it.
In most cases, SSIS automatically retrieves the metadata. Components that do not connect to an external data source, like Conditional Split, get their metadata from the other components they are connected to. For the pipeline components that connect to an external data source (like OLE DB Source, OLE DB Destination, Lookup, etc.), SSIS provides a mechanism to get this metadata without human involvement. This mechanism involves the driver connecting to the database and retrieving the metadata of the output. If the driver/database is capable of returning the metadata, then that metadata is used. If the driver/database is incapable, then you get the errors you are seeing. The rest of my comments are based on the assumption that you are using a SQL Server database in your question.
When working with a SQL Server database in SSIS, we typically use the native client drivers provided by Microsoft. When trying to get the metadata, these drivers try to get it without actually executing the SQL statement (actual execution can have side effects, and it might take more than a few seconds/minutes/hours; you don't want side effects and long wait times at package design time). To get the metadata, the driver relies on the metadata of the actual objects used in the SQL command. If the command uses a physical table or view, SQL Server already has the metadata available and can supply it to the driver. If it is a temp table, SQL Server does not have the metadata until it can create the temp table. With the SET FMTONLY option, you can arrange for the temp tables to be created while avoiding any heavy processing or side effects, and thus retrieve the metadata without penalties.
Post 2012, the native client drivers rely on newer functionality to retrieve metadata than the drivers before 2012: in 2012 and after, the driver uses the sp_describe_first_result_set proc. So, whether you can get metadata or not is determined by the ability of the sp_describe_first_result_set proc.
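As an aside, you can see exactly what the driver would see by calling that proc yourself (dbo.MyProc is a hypothetical name):
EXEC sp_describe_first_result_set N'EXEC dbo.MyProc', NULL, 0;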
So while SSIS can automatically get the metadata (because of the driver/database), it does not automatically get the metadata in some cases (again because of the driver/database). In the second scenario, some other process (typically a human) can help the driver infer the metadata or provide it to the component directly.
To help the driver, in the case of SQL Server 2012 and later, you can use the WITH RESULT SETS clause to specify the output metadata. When this clause is present, the driver uses it and doesn't try to query the metadata from system objects, thus avoiding the error you would otherwise get. If you are using the drivers that came with SQL Server 2008, you can use SET FMTONLY. This option is at the driver/database level.
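A minimal sketch of that first option (the proc and column names are hypothetical; the declared shape must match what the proc actually returns):
EXEC dbo.MyProc
WITH RESULT SETS
(
    (
        Id   INT         NOT NULL,
        Name VARCHAR(50) NULL
    )
);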
Another option is to use a Script Component as the source and specify the columns/metadata in its Output columns. SSIS will not try to retrieve metadata from the data source in this case, but will rely on the definitions you provide in the Output section of the Script Component.
As you can see, both options involve a human (or some other process) specifying the metadata instead of SSIS trying to retrieve it in an automated fashion. I would prefer the first option when working with SQL Server and the second when working with databases like MySQL.

SSIS - check if database and tables exist, if not - run sql to create

I am planning on importing data into an Azure SQL database using an SSIS package. I know I can do that with an OLEDB Source and Destination, but I also want to check if the database and tables exist and, if not, create them. I am planning on using an Execute SQL task to create the database and tables, but how do I first check if they already exist?
So if the database and tables exist, I will run the data flow task to transfer the data, but if they do not exist, I will run the Execute SQL task to create the database and tables and then run the data flow task.
How can I accomplish that?
Create an OLE DB connection manager to the server and the master database on it. Use this connection manager for the next two steps.
Create two SQL tasks in a container. The first SQL task checks whether the database exists. You can pass the database name as a variable and use it in the SQL as shown below; the "?" is the database name parameter.
IF NOT EXISTS(SELECT * FROM sys.sysdatabases where name=?)
-- create database
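For completeness, a sketch of what the full statement for that first task could look like (assuming the database name is mapped to the single ? parameter; dynamic SQL is needed because CREATE DATABASE cannot take its name from a variable directly):
DECLARE @dbName sysname = ?;
IF NOT EXISTS (SELECT * FROM sys.sysdatabases WHERE name = @dbName)
BEGIN
    DECLARE @sql nvarchar(max) = N'CREATE DATABASE ' + QUOTENAME(@dbName);
    EXEC (@sql);
END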
Then, for the second SQL task, apply something like the following, in which the database and table names are passed as variables. In contrast to the previous SQL, you can use an expression to define the SQL.
"IF OBJECT_ID(N'" + @[User::DatabaseName] + ".dbo." + @[User::TableName] + "', N'U') IS NULL
CREATE TABLE " + @[User::DatabaseName] + ".dbo." + @[User::TableName] + "
(
Field1 VARCHAR(20) NOT NULL
,Field2 TINYINT NOT NULL
);"
I'm not very familiar with Azure or SSIS, but in SQL Server you can check to see if an object exists like this:
IF OBJECT_ID('dbo.UserTable', 'U') IS NULL
-- Doesn't Exist.
I hope this helps in some way.
SSDT (the database tool, not the SSDT-BI product suite) uses a declarative model to manage and deploy database schemas. You can get SSDT here.
Once you create a new SQL Server Database Project, you can import an existing database schema. In the project properties, you set the target platform to Azure. When you build the project, it creates a dacpac file as output. This file is an archive that contains the definition of your database.
SSDT comes with a utility called sqlpackage.exe, which you can learn about here. You can pass in your dacpac file and target server and have the utility "publish" the database. If the database is there and the schemas match, it will do nothing. If the database does not exist, all objects will be created. If the schema differs from what is expected, it will be updated according to how it is defined in the project.
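A hedged example of such a publish call (the file, server, and database names are placeholders; exact parameters can vary between sqlpackage versions):
sqlpackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDatabase"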
SSDT has loads more benefits than this, but for your case, it will simplify the process and make it far more maintainable.
BTW, if the database does not exist, you may need to set DelayValidation on your connection managers and tasks in the rest of the package so it does not fail validation. Or better, create a master package that first runs the sqlpackage process task and then calls your loading package as a child package.

Setting up an SSMS Linked Server to DB2

I have a local SQL Server instance on which I created a Linked Server connection to a DB2 database named "DB2OurDatabase." In creating the Linked Server connection, I specified a UID and PWD that I use in various query tools or applications to query "[SchemaX].[TableX]."
I seemed to have success in creating the Linked Server: a Linked Server object named "DB2OurDatabase" was created under the Linked Servers node in SSMS, and when I expand it, I am able to see the list of tables in the database.
When I right-click on the [SchemaX].[TableX] table and select
"Script Table as => Select To => New Window", a new query window is opened with the text
--[DB2OurDatabase].[DataCenterCityName2_DB2OurDatabase].[SchemaX].[TableX]
contains no columns that can be selected or the current user does not have permissions on that object.
GO
I don't understand how I was able to create a Linked Server that can see the table names in the database, yet I apparently lack the rights to query the table, even though I am using the same credentials that I have used in the SQuirreL SQL query tool, for example, to query the table.
In SSMS, I tried to execute this
SELECT *
FROM [DB2OurDatabase].[DataCenterCityName2_DB2OurDatabase].[SchemaX].[TableX]
Error:
Msg 7314, Level 16, State 1, Line 1
The OLE DB provider "IBMDADB2" for linked server "DB2OurDatabase" does not contain the table ""DataCenterCityName2_DB2OurDatabase"."SchemaX"."TableX"". The table either does not exist or the current user does not have permissions on that table.
I was a little surprised that the fully qualified table name included [DataCenterCityName2_DB2OurDatabase] since I did not specify this when I set up the Linked Server connection, but the name of the DataCenter city was correct so I took this as a further sign that the Linked Server connection was successful.
Nevertheless, I also tried removing this level of the fully qualified table name:
SELECT *
FROM [DB2OurDatabase].[SchemaX].[TableX]
which resulted in this error.
Msg 208, Level 16, State 1, Line 1
Invalid object name 'DB2OurDatabase.[SchemaX].[TableX]'.
What do I need to do to create a DB2 Linked Server that lets me query the tables in the DB2 database?
I haven't investigated what are probably multiple ways to connect to and query DB2 from SQL Server, but this worked:
SELECT * FROM OPENQUERY(DB2OurDatabase, 'SELECT * FROM SchemaX.TableX')
Obviously, you modified the actual commands by replacing object names, so it's impossible to be sure, but the problem may be caused by your use of quoted identifiers (those square brackets), which essentially makes the object names case-sensitive. DB2 will by default create object (table, schema) names in uppercase, unless they are quoted. create table MySchema.MyTable... (unquoted) on the DB2 side will create the table MYSCHEMA.MYTABLE, and referencing it later from SSMS as [MySchema].[MyTable] (using quoted identifiers) will obviously fail.
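If that is indeed the cause, uppercasing the names inside the brackets should match what DB2 stored (SchemaX/TableX are the placeholders from the question):
SELECT *
FROM [DB2OurDatabase].[DataCenterCityName2_DB2OurDatabase].[SCHEMAX].[TABLEX]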
These are the 3 steps that brought me to the solution:
Download and install the Microsoft OLE DB Provider for DB2 Version 6.0 (the latest version at the time of writing; there may be a newer one by the time you read this)
From the Start menu, open the Data Access Tool > File > New Data Source and complete all the steps: provide the usual connection details such as Server, Port, Database, User, and Password. If unsure, contact your DBA. Once completed, test the connection and copy the connection string
Now in SSMS go to Server Objects > Linked Servers > New Linked Server and fill in the dialog, pasting into the Provider string field the connection string you copied before from the Data Access Tool:
Done, you are good to go now.

Automating tasks on more than one SQL Server 2008 database

We host multiple SQL Server 2008 databases provided by another group. Every so often, they provide a backup of a new version of one of the databases, and we run through a routine of deleting the old one, restoring the new one, and then going into the newly restored database and adding an existing SQL login as a user in that database and assigning it a standard role that exists in all of these databases.
The routine is the same, except that each database has a different name and different logical and OS names for its data and log files. My inclination was to set up an auxiliary database with a table defining the set of names associated with each database, and then create a stored procedure accepting the name of the database to be replaced and the name of the backup file as parameters. The SP would look up the associated logical and OS file names and then do the work.
This would require building the commands as strings and then exec'ing them, which is fine. However, the stored procedure, after restoring a database, would then have to USE it before it would be able to add the SQL login to the database as a user and assign it to the database role. A stored procedure can't do this.
What alternative is there for creating an automated procedure with the pieces filled in dynamically and that can operate cross-database like this?
I came up with my own solution.
Create a job to do the work, specifying that the job should be run out of the master database, and defining one Transact-SQL step for it that contains the code to be executed.
In a utility database created just for the purpose of hosting objects to be used by the job, create a table meant to contain at most one row, whose data will be the parameters for the job.
In that database, create a stored procedure that can be called with the parameters that should be stored for use by the job (including the name of the database to be replaced). The SP should validate the parameters, report any errors, and, if successful, write them to the parameter table and start the job using msdb..sp_start_job.
In the job, for any statement where the job needs to reference the database to be replaced, build the statement as a string and EXECUTE it.
For any statement that needs to be run in the database that's been re-created, doubly-quote the statement to use as an argument for the instance of sp_executesql IN THAT DATABASE, and use EXECUTE to run the whole thing (a fuller sketch follows after these steps):
SET @statement = @dbName + '..sp_executesql ''[statement to execute in database @dbName]''';
EXEC (@statement);
Configure the job to write output to a log file.
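For illustration, a fuller sketch of that doubly-quoted step (the database, login, and role names are hypothetical; REPLACE doubles the quotes so the inner statement survives being embedded in the outer string):
DECLARE @dbName sysname = N'MyRestoredDb';  -- hypothetical restored database
DECLARE @login  sysname = N'AppLogin';      -- hypothetical existing SQL login
DECLARE @inner  nvarchar(max);
DECLARE @statement nvarchar(max);

-- Statement that must run inside the restored database:
SET @inner = N'CREATE USER ' + QUOTENAME(@login) + N' FOR LOGIN ' + QUOTENAME(@login)
           + N'; EXEC sp_addrolemember N''StandardRole'', N''' + @login + N''';';

-- Route it through that database's local sp_executesql so it executes in context:
SET @statement = QUOTENAME(@dbName) + N'..sp_executesql N'''
               + REPLACE(@inner, N'''', N'''''') + N'''';
EXEC (@statement);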