SQL Server 2014: Re-Create System view master.dbo.spt_values - sql-server-2014

On my test SQL Server 2014 installation, I was "cleaning" the master database.
With the following command, I was checking which user objects there are:
SELECT
    'DROP ' +
    CASE
        WHEN [sys].[all_objects].[type] IN ('AF','FN','FS','FT','IF','TF') THEN 'FUNCTION '
        WHEN [sys].[all_objects].[type] IN ('D','C','F','PK','UQ') THEN 'CONSTRAINT '
        WHEN [sys].[all_objects].[type] IN ('IT','S','U') THEN 'TABLE '
        WHEN [sys].[all_objects].[type] IN ('P','PC','RF','X') THEN 'PROCEDURE '
        WHEN [sys].[all_objects].[type] IN ('TA','TR') THEN 'TRIGGER '
        WHEN [sys].[all_objects].[type] = 'R' THEN 'RULE '
        WHEN [sys].[all_objects].[type] = 'SN' THEN 'SYNONYM '
        WHEN [sys].[all_objects].[type] = 'TT' THEN 'TYPE '
        WHEN [sys].[all_objects].[type] = 'V' THEN 'VIEW '
    END +
    SCHEMA_NAME([sys].[all_objects].[schema_id]) + '.' + OBJECT_NAME([object_id]) + '; ' AS [Command],
    OBJECT_NAME([object_id]) AS [ObjectName],
    [sys].[all_objects].[type_desc] AS [TypeDesc],
    [sys].[all_objects].[type] AS [Type],
    SCHEMA_NAME([sys].[all_objects].[schema_id]) AS [Schema]
FROM
    [sys].[all_objects] WITH (NOLOCK)
WHERE SCHEMA_NAME([sys].[all_objects].[schema_id]) LIKE '%dbo%'
One of the results was the view spt_values.
Command                   | ObjectName | TypeDesc | Type | Schema
--------------------------|------------|----------|------|-------
DROP VIEW dbo.spt_values; | spt_values | VIEW     | V    | dbo
As it was not one of the views I knew, I deleted it (along with other objects).
Later that day, I wanted to check the properties of a database in SSMS 2016 and got an error about the invalid object name 'master.dbo.spt_values'.
After some searching, I found that I could recreate the missing view with the script u_tables.sql (located in the SQL Server installation folder on your server). Information from here: https://ashishgilhotra.wordpress.com/tag/u_tables-sql/
The code in that script to create the view is the following:
create view spt_values as
select name collate database_default as name,
number,
type collate database_default as type,
low, high, status
from sys.spt_values
go
EXEC sp_MS_marksystemobject 'spt_values'
go
grant select on spt_values to public
go
Just from looking at the code, I doubted that it would work, as there is no sys.spt_values table anywhere to be found.
As expected, I got the error:
Msg 208, Level 16, State 1, Procedure spt_values, Line 6
Invalid object name 'sys.spt_values'.
On my other server with SQL Server 2008 on it, there is a table master.dbo.spt_values (but no view)!
After some more searching, I found that I could just create a table with the same name. Link here: https://www.mssqltips.com/sqlservertip/3694/fix-invalid-object-name-masterdbosptvalues-when-viewing-sql-server-database-properties/
So I created a table with the values from another SQL Server 2014 installation, and everything seemed to be working again.
But it is not correct!
When I check the newly created object on the test server with this command
select [name] , [type], [type_desc]
from sys.objects
where name like 'spt_v%'
It shows a user_table object. On my other server, it shows a view...
So, my question is: How can I create the view spt_values which gets its data from a table spt_values?

OK, after some fiddling around, I found the solution.
The table sys.spt_values is in the resource database (mssqlsystemresource). This database is only accessible when the SQL Server service is started in single-user mode.
To re-create the view I had to do the following steps:
1. Stop all SQL services
2. Start the SQL service in single-user mode
Open a command prompt and start sqlservr.exe with the -m switch:
sqlservr.exe -sSQLT01 -m
3. Connect SSMS to the instance
Only connect a query window, not the Object Explorer window; the service accepts only a single connection! If there is a problem, you can see it in the command prompt window where the service is running.
4. Delete the wrong table spt_values
As I had created a table spt_values in the master database, I have to delete it first:
use master
go
drop table dbo.spt_values
5. Create the view
Now I can finally create the view dbo.spt_values, which points to the table sys.spt_values:
use master
go
create view spt_values as
select name collate database_default as name,
number,
type collate database_default as type,
low, high, status
from sys.spt_values
go
EXEC sp_MS_marksystemobject 'spt_values'
go
grant select on spt_values to public
go
6. Check the dbo.spt_values object
use master
select schema_name(schema_id), object_id('spt_values'), *
from sys.objects
where name like 'spt_v%'
It should show a view now
7. Query the view dbo.spt_values and the table sys.spt_values
Just for the fun of it... You can now query the table sys.spt_values, which is in the resource database:
use mssqlsystemresource
Select * from sys.spt_values
And you can query the view dbo.spt_values, which is in the master database
use master
Select * from dbo.spt_values
8. Restart the services
You can now quit the command prompt window where the SQL service is running and start the SQL services normally. Or you can just restart the whole server.
Hope this post will help others in the future.

The u_tables.sql script will create master.dbo.spt_values, but I had to run it with the Dedicated Administrator Connection (DAC).
Solution:
Step 1: Open the Command Prompt as Admin
Step 2: Run the following:
sqlcmd -S <ServerName> -U sa -P <Password> -A -i "C:\Program Files\Microsoft SQL Server\MSSQL14.SQL2017\MSSQL\Install\u_tables.sql"
Swap out <ServerName> and <Password> with your values. If you don't have the password for sa, you will need to find a user with sufficient rights (and replace 'sa' with your user name).
The -A runs the command as the DAC. This should be used sparingly. See MS documentation on the DAC.
Find the u_tables.sql file in your installation directory. The path above is where it is on my machine with SQL 2017 installed in the default location on the C: drive.

Related

Generate DDL from 4D database

I have inherited a 4D database that I need to extract all the data from to import to another relational database. The 4D database ODBC driver seems to have quite a few quirks that prevents it from being used as a SQL Server linked server. I can give the gory details if anyone wants them but suffice to say; it's not looking like a possibility.
Another possibility I tried was using the MS SQL Server Import Data wizard. This is, of course, SSIS under the covers and it requires the 32 bit ODBC driver. This gets part of the way but it fails trying to create the target tables because it doesn't understand what a CLOB datatype is.
So my reasoning is that if I can build the DDL from the existing table structure in the 4D database I might be able to just import the data using the Data Import wizard if I create the tables first.
Any thoughts on what tools I could use to do this?
Thanks.
Alas, the 4D ODBC drivers are a (ahem) vessel filled with a fertiliser so powerful that none may endure its odour...
There is no simple answer but if you have made it here, you are already in a bad place so I will share some things that will help.
You can use the freeware ODBC Query Tool that can connect to the ODBC through a user or system DSN with the 64 bit driver. Then you run this query:
SELECT table_id, table_name,column_name, data_type, data_length, nullable, column_id FROM _user_columns ORDER BY table_id, column_id limit ALL
Note: ODBC Query Tool fetches the first 200 row pages by default. You need to scroll to the bottom of the result set.
I also tried DataGrip from JetBrains and RazorSQL. Neither would work against the 4D ODBC DSN.
Now that you have this result set, export it to Excel and save the spreadsheet. I found the text file outputs not to be useful; they are exported as readable text, not CSV or tab-delimited.
I then used the Microsoft SQL Server Import Data Wizard (which is SSIS) to import that data into a table that I could then manipulate. I am targeting SQL Server, so it makes sense for me to take this step, but if you are importing to another destination database, you may create the table definitions from the data you now have with whatever tool you think is best.
Once I had this in a table, I used this T-SQL script to generate the DDL:
use scratch;
-- Reference for data types: https://github.com/PhenX/4d-dumper/blob/master/dump.php
declare @TableName varchar(255) = '';

declare C1 cursor for
    select distinct table_name
    from [dbo].[4DMetadata]
    order by 1;

open C1;
fetch next from C1 into @TableName;

declare @SQL nvarchar(max) = '';
declare @ColumnDefinition nvarchar(max) = '';
declare @Results table (columnDefinition nvarchar(max));

while @@FETCH_STATUS = 0
begin
    set @SQL = 'CREATE TABLE [' + @TableName + '] (';

    declare C2 cursor for
        select
            '[' + column_name + '] ' +
            case data_type
                when 1 then 'BIT'
                when 3 then 'INT'
                when 4 then 'BIGINT'
                when 5 then 'BIGINT'
                when 6 then 'REAL'
                when 7 then 'FLOAT'
                when 8 then 'DATE'
                when 9 then 'DATETIME'
                when 10 then
                    case
                        when data_length > 0 then 'NVARCHAR(' + cast(data_length / 2 as nvarchar(5)) + ')'
                        else 'NVARCHAR(MAX)'
                    end
                when 12 then 'VARBINARY(MAX)'
                when 13 then 'NVARCHAR(50)'
                when 14 then 'VARBINARY(MAX)'
                when 18 then 'VARBINARY(MAX)'
                else 'BLURFL' -- Put some garbage in to prevent this from creating a table!
            end +
            case nullable
                when 0 then ' NOT NULL'
                when 1 then ' NULL'
            end +
            ', '
        from [dbo].[4DMetadata]
        where table_name = @TableName
        order by column_id;

    open C2;
    fetch next from C2 into @ColumnDefinition;

    while @@FETCH_STATUS = 0
    begin
        set @SQL = @SQL + @ColumnDefinition;
        fetch next from C2 into @ColumnDefinition;
    end

    -- Set the last comma to be a closing parenthesis and statement-terminating semicolon
    set @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1) + ');';

    close C2;
    deallocate C2;

    -- Add the result
    insert into @Results (columnDefinition) values (@SQL);

    fetch next from C1 into @TableName;
end

close C1;
deallocate C1;

select * from @Results;
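To sanity-check the type mapping outside SQL Server, the CASE expression above can be mirrored in Python. The type codes come straight from the script; the helper itself is hypothetical, not part of 4D or its driver:

```python
# Mirror of the CASE expression in the T-SQL script above.
def sql_type(data_type, data_length):
    fixed = {1: 'BIT', 3: 'INT', 4: 'BIGINT', 5: 'BIGINT', 6: 'REAL',
             7: 'FLOAT', 8: 'DATE', 9: 'DATETIME', 12: 'VARBINARY(MAX)',
             13: 'NVARCHAR(50)', 14: 'VARBINARY(MAX)', 18: 'VARBINARY(MAX)'}
    if data_type == 10:
        # the script divides data_length by 2 for NVARCHAR sizes
        return 'NVARCHAR(%d)' % (data_length // 2) if data_length > 0 else 'NVARCHAR(MAX)'
    return fixed.get(data_type, 'BLURFL')  # garbage marker, as in the script

print(sql_type(10, 80))
```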
I used the generated DDL to create the database table definitions.
Unfortunately, SSIS will not work with the 4D database ODBC driver. It keeps throwing authentication errors. But you may be able to load this database with your own bespoke tool that works with the ODBC weirdness of 4D.
I have my own tool (unfortunately I cannot share it) that will load the XML exported data directly to the database. So I am finished.
Good luck.
Boffin,
Does "inherited a 4D database" mean it's running or that you have the datafile and structure but can't open it?
If it's running and you have access to the user environment the easy thing to do is simply use 4D's export functions. If you don't have access to the user environment the only option for ODBC would be if it's designed to allow ODBC or if the developer provided some export capability.
If you can't run it you won't be able to directly access the datafile. 4D uses a proprietary data structure and it changed from version to version. It's not encrypted by default so you can actually read/scavenge the data, but you can't just build a DDL and pull from it. ODBC is a connection between the running app and some other source.
Your best bet will be to contact the developer and ask for help. If that's not an option get the thing running. If it's really old you can contact 4D to get a copy of archived versions. Depending on which version it is and how it's built (compiled, interpreted, engined) your options vary.
[Edit] The developer can specify the schema that's available through SQL and we frequently limit what's exposed either for security or usability reasons. It sounds like this may be the case here - it would explain why you don't see the total structure.
This can also be done with the native 4D structure. I can limit how much of the 4D structure is visible in user mode on a field by field/table by table basis. Usually this is to make the system less confusing to users but it's also a way to enforce data security. So I could allow you to download all your 'data' while not allowing you to download the internal elements that make the database to work.
If you are able to export the data you want that sounds like the thing to do even if it is slow.

Dynamic access to a server by server name

In SQL Server (I'm using 2008), is it possible to dynamically access a server by its name?
My scenario: I have a production server, a development server, and a test server. Their structure is the same. There is a fourth server with some additional data - let's call it a data server.
On the data server there is a procedure. One of its parameters is the name of the requesting server:
create proc sp_myProcedure (@myId int, @serverName nvarchar(100))
The procedure accesses tables from the data server and from the requesting server. At the moment, to query the requesting server I'm using a case expression:
-- code on the data server
select additionalData = case @serverName
-- if the requesting server is production - query production
when 'ProdServer' then (select field1 from [ProdServer].[MyDataBase].[dbo].[MyTable] ...
-- if the requesting server is test - query test
when 'TestServer' then (select field1 from [TestServer].[MyDataBase].[dbo].[MyTable] ...
-- if the requesting server is development - query development
when 'DevServer' then (select field1 from [DevServer].[MyDataBase].[dbo].[MyTable] ...
end
My question is if there is any other way to access the requesting server. I'd like to replace ifs and cases with something more dynamic. Is it, for instance, possible to use the server name variable to dynamically access specific server. Something similar to the following (mocked) query:
declare myServer <server type> = Get_Server(#serverName)
-- the query
additionalData = select field1 from [myServer].[MyDataBase].[dbo].[MyTable]
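For what it's worth, the usual dynamic alternative to the CASE is to build the four-part name from a whitelisted server name and run the resulting string with sp_executesql. The string construction can be sketched as follows (a hypothetical helper; the names come from the question):

```python
ALLOWED_SERVERS = ('ProdServer', 'TestServer', 'DevServer')

def build_remote_query(server_name):
    # Reject anything not on the whitelist so the name can be safely spliced
    # into dynamic SQL (linked-server names cannot be parameterized).
    if server_name not in ALLOWED_SERVERS:
        raise ValueError('unknown server: %r' % server_name)
    return ('select field1 from [%s].[MyDataBase].[dbo].[MyTable]'
            % server_name)

print(build_remote_query('TestServer'))
```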
I liked this approach
SELECT
SERVERPROPERTY('MachineName') AS [ServerName],
SERVERPROPERTY('ServerName') AS [ServerInstanceName],
SERVERPROPERTY('InstanceName') AS [Instance],
SERVERPROPERTY('Edition') AS [Edition],
SERVERPROPERTY('ProductVersion') AS [ProductVersion],
Left(@@VERSION, Charindex('-', @@VERSION) - 2) As VersionName
Another approach we used was to create one database called database_yourprojectname.
For the explanation I'm using northwind as the database name, so you would create one new database called northwind_db,
which has the following fields:
Servername, username (encrypted), password (encrypted), active
Then you can either make one page to insert/update/delete the current database used there,
or you can add the data statically, so that you can use the database which is currently active.
Or use simple one:
SELECT @@SERVERNAME
Which is already stated here

With pyodbc on SQL Server, create table using select * into table from openrowset(BULK ... ) has no effect

Environment: Windows 64bit, python 2.7.8 (32 bit), pyodbc v. 3.0.7
For what I need to do, I cannot use linkedservers, per internal policy.
Using python, I'm attempting to:
1. Export a table's data (*.dat) and its structure (format - *.fmt) from one SQL Server (either 2008 or 2012) using bcp. The exported files sit on my local machine. I make 2 BCP calls: one to get the format (.fmt) file, another to get the data (.dat) (I could not find a way to do both in one step).
2. Import the data of the given table into a database - MyDatabase - where I have full permission (per the DBA's claim), on either the same SQL Server but a different database, or on another server altogether. Here, my primary goal is to automate both the creation of the table to be imported (based on the exported fmt file) and the actual importing of its data.
I've got point 1 working, where I can dynamically specify the server, catalog, schema, and table to export, and python automagically creates the table.dat and table.fmt files on my local machine under a dedicated folder - DedicatedShareFolder
DedicatedShareFolder is a shared folder on my local machine that stores the exported tables and their fmt files. It is accessible to the SQL Servers I'm trying to import those tables into.
In point 2, I used python to build a SQL statement as follows:
sql = ("select * into %s from "
       "openrowset(BULK '\\\\%s\\DedicatedShareFolder\\%s.dat', "
       "FORMATFILE = '\\\\%s\\DedicatedShareFolder\\%s.fmt') as A"
       % (newTableName, os.environ['COMPUTERNAME'], table,
          os.environ['COMPUTERNAME'], table))
Which ends up looking like:
select A.* into MyDatabase.dbo.blah48 from
openrowset(BULK '\\MyMachineName\DedicatedShareFolder\table.dat',
FORMATFILE = '\\MyMachineName\DedicatedShareFolder\table.fmt') as A;
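The doubled backslashes in the Python literal are easy to get wrong, so here is a quick check that the %-formatting really collapses to the UNC paths shown above (plain variables stand in for os.environ['COMPUTERNAME'] and the real table names):

```python
machine = 'MyMachineName'            # stands in for os.environ['COMPUTERNAME']
table = 'table'
new_table = 'MyDatabase.dbo.blah48'

# Each '\\\\' in the source is two backslashes in the string value,
# so the BULK path starts with the \\server UNC prefix.
sql = ("select A.* into %s from "
       "openrowset(BULK '\\\\%s\\DedicatedShareFolder\\%s.dat', "
       "FORMATFILE = '\\\\%s\\DedicatedShareFolder\\%s.fmt') as A"
       % (new_table, machine, table, machine, table))
print(sql)
```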
I create a connection to the SQL server that has MyDatabase, and execute:
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=%s;DATABASE=%s;UID=%s;PWD=%s' % (server, catalog, login, pw))
cursor = cnxn.cursor()
rows = cursor.execute(sql).rowcount
print "Done importing %s rows" %rows
I get:
Done importing 606597 rows
Alas, the table was not created
I ran a trace on my local machine using 'ODBC Data Source Administration''s 'Tracing' tab. I opened the log file, and could not find any error pertaining to the creation of the table. I do see entries like this:
test_DB 5200-5a28 EXIT SQLDriverConnectW with return code 1 (SQL_SUCCESS_WITH_INFO)
HDBC 0x03C59190
HWND 0x00000000
WCHAR * 0x74C28B34 [ -3] "******\ 0"
SWORD -3
WCHAR * 0x74C28B34 <Invalid buffer length!> [-3]
SWORD -3
SWORD * 0x00000000
UWORD 0 <SQL_DRIVER_NOPROMPT>
DIAG [01000] [Microsoft][ODBC SQL Server Driver][SQL Server]Changed database context to 'mydatabase'. (5701)
DIAG [01000] [Microsoft][ODBC SQL Server Driver][SQL Server]Changed language setting to us_english. (5703)
DIAG [01S00] [Microsoft][ODBC SQL Server Driver]Invalid connection string attribute (0)
I suspect 'Invalid buffer length' has no effect here (couldn't find info on it in my search), and I do see driver changing context and language successfully.
I also see few entries in trace log like this:
test_DB 50c0-4fec EXIT SQLGetTypeInfo with return code -1 (SQL_ERROR)
HSTMT 0x03AC8E80
SWORD 12 <SQL_VARCHAR>
DIAG [24000] [Microsoft][ODBC Driver Manager] Invalid cursor state (0)
Nothing significant turned up in my search indicating that 'invalid cursor state' has any effect here. The entry above could be due to other parts of the code I'm still working on.
Further down the trace log, I see:
test_DB 50c0-4fec ENTER SQLExecDirect
HSTMT 0x03ACB5B0
UCHAR * 0x025BBB94 [ -3] "select A.* into MyDatabase.dbo.blah48 from
openrowset(BULK '\\MyMachineName\DedicatedShareFolder\table.dat',
FORMATFILE = '\\MyMachineName\DedicatedShareFolder\table.fmt') as A\ 0"
SDWORD -3
test_DB 50c0-4fec EXIT SQLExecDirect with return code 0 (SQL_SUCCESS)
HSTMT 0x03ACB5B0
UCHAR * 0x025BBB94 [ -3] "select A.* into MyDatabase.dbo.blah48 from
openrowset(BULK '\\MyMachineName\DedicatedShareFolder\table.dat',
FORMATFILE = '\\MyMachineName\DedicatedShareFolder\table.fmt') as A\ 0"
SDWORD -3
test_DB 50c0-4fec ENTER SQLRowCount
HSTMT 0x03ACB5B0
SQLLEN * 0x0027EFD4
test_DB 50c0-4fec EXIT SQLRowCount with return code 0 (SQL_SUCCESS)
HSTMT 0x03ACB5B0
SQLLEN * 0x0027EFD4 (606597)
The trace file indicates the table was created. Alas, it wasn't.
Out of desperation, I looked in every database on the server I'm testing on. No blah tables anywhere, even though MyDatabase is the only one where I have write permission.
If I execute the same "select * into ... openrowset ... bulk ... " statement in Microsoft SQL Server Management Studio, it succeeds (logged in as the same user used in python script)
I use functions in the same script to perform many other SQL-related tasks, successfully. The import is the only thing not working.
I've also run every negative unit test I could think of to make sure no variable is getting changed midway. Nothing.
I'm a beginner in python. I'm either doing something gravely wrong in my code, or ?
If a "select * into ... openrowset ... bulk ..." type of statement cannot be used to achieve my goal, what other SQL solution can I use to create a table and load its data, based on its BCP dat and fmt files?
Thanks.
Automatic commit of transactions is disabled in the Connection object by default. All of your requested work is actually being done, the transaction is just being rolled back when the connection is closed.
Couple options:
Commit your changes with either Connection.commit() or Cursor.commit(). They are functionally the same; Cursor.commit() was added so it's not necessary to keep a reference to the Connection object.
Set the autocommit variable to True when the connection is created. Note that it is an argument to the pyodbc.connect() function; it's not part of the connection string. Setting autocommit with your coding style would be:
....
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=%s;DATABASE=%s;UID=%s;PWD=%s' % (server, catalog, login, pw),
autocommit=True)
....
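The rollback-on-close behavior is easy to demonstrate with the standard-library sqlite3 module, which has the same commit-on-demand default as pyodbc (a sketch of the concept, not pyodbc itself):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

# Set up a table and commit it.
conn = sqlite3.connect(path)
conn.execute('CREATE TABLE t (x INTEGER)')
conn.commit()
conn.close()

# Insert WITHOUT committing: closing the connection rolls the row back.
conn = sqlite3.connect(path)
conn.execute('INSERT INTO t VALUES (1)')
conn.close()

conn = sqlite3.connect(path)
lost = conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
conn.close()

# Insert WITH a commit: the row survives.
conn = sqlite3.connect(path)
conn.execute('INSERT INTO t VALUES (1)')
conn.commit()
conn.close()

conn = sqlite3.connect(path)
kept = conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
conn.close()

print(lost, kept)
```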

Proftpd specific user configuration from MySQL

I have already set up a proftpd server with a MySQL connection.
Everything works fine.
I would like to set specific permissions for each user from the database using (PathAllowFilter, PathDenyFilter, ...)
The server running on Ubuntu 12.04 LTS distribution.
It is not so easy; there is no single module to do this, but I found a solution.
It's not optimal, because you have to restart the ProFTPd server each time you change the MySQL configuration, but it works.
As you have a ProFTPd server that already runs with MySQL, I will explain only the specific user configuration part.
For this solution you need ProFTPd to be compiled with these modules:
mod_ifsession (with this module you will be able to configure <IfUser> conditions)
mod_conf_sql (with this module you will be able to load configuration from MySQL)
To help you with the ProFTPd recompilation, you can run proftpd -V to see how your version is configured. You can find some documentation here.
Once you have compiled your ProFTPd server and it runs, you will have to log on to your MySQL server.
If you read the mod_conf_sql documentation, it says to create 3 tables: ftpctxt, ftpconf, ftpmap. We will not create these tables unless you want to have global configuration from MySQL.
We will fake the MySQL configuration with "views".
1. First you add each specific configuration as user column (make sure to have a default value):
ALTER TABLE ftpuser
ADD PathDenyFilter VARCHAR(255) NOT NULL DEFAULT '(\.ftp)|(\.hta)[a-z]+$';
ALTER TABLE ftpuser
ADD PathAllowFilter VARCHAR(255) NOT NULL DEFAULT '.*$';
....
2. Create the conf view:
User's id and configuration column are concatenated to make a unique id
User's configuration column is used as type
User's configuration value is used as info
View is an union of selects (for every column an union is required)
CREATE VIEW ftpuser_conf AS SELECT concat(ftpuser.id,'-PathDenyFilter')
AS id,'PathDenyFilter' AS type,ftpuser.PathDenyFilter AS info from ftpuser
UNION
SELECT concat(ftpuser.id,'-PathAllowFilter')
AS id,'PathAllowFilter' AS type, ftpuser.PathAllowFilter AS info
from ftpuser;
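Since every additional configuration column requires another UNION branch, the view definition can also be generated. A hypothetical Python helper (not part of ProFTPd or mod_conf_sql) sketches the pattern:

```python
# Generate the ftpuser_conf view for any list of configuration columns,
# one UNION branch per column, matching the hand-written view above.
def conf_view_sql(columns, table='ftpuser'):
    branches = [
        "SELECT concat({t}.id,'-{c}') AS id, '{c}' AS type, {t}.{c} AS info FROM {t}"
        .format(t=table, c=col)
        for col in columns
    ]
    return ('CREATE VIEW {t}_conf AS\n'.format(t=table)
            + '\nUNION\n'.join(branches) + ';')

print(conf_view_sql(['PathDenyFilter', 'PathAllowFilter']))
```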
3. Create the ctxt view
This view is a concatenation of a "Default" row and user rows (the "Default" row has 1 as id, and user rows have the user's id + 1 as id).
Concatenate "userconf-" and user's id as name
"IfUser" as type
User's username as info
CREATE VIEW ftpuser_ctxt AS
SELECT 1 AS id,NULL AS parent_id, 'default' AS name, 'default' AS type, NULL AS info
UNION
SELECT (ftpuser.id + 1) AS id,1 AS parent_id,
concat('userconf-',ftpuser.userid) AS name,
'IfUser' AS type,ftpuser.userid AS info
FROM ftpuser;
4. Create the map view
User's id and configuration column are concatenated for conf_id
User's id + 1 for ctxt_id
View is an union of selects (for every column an union is required)
CREATE VIEW ftpuser_map
AS SELECT concat(ftpuser.id,'-PathDenyFilter')
AS conf_id,(ftpuser.id + 1) AS ctxt_id
from ftpuser
union
select concat(ftpuser.id,'-PathAllowFilter')
AS conf_id,(ftpuser.id + 1) AS ctxt_id
from ftpuser;
5. Add these lines to your ProFTPd configuration
<IfModule mod_conf_sql.c>
Include sql://user:password@host/db:database/ctxt:ftpuser_ctxt:id,parent_id,type,info/conf:ftpuser_conf:id,type,info/map:ftpuser_map:conf_id,ctxt_id/base_id=1
</IfModule>
Where:
user => your MySQL username
password => your MySQL password
host => your MySQL host
database => your MySQL database
6. Restart your ProFTPd server
I hope this will help you. Good luck

Copying large data from result of query in MS SQL Server Management Studio

I have a query that returns a large 'ntext' result. I want to copy this over to a plain text editor (Notepad), but only a part gets copied over.
I tried increasing Query Options -> Results -> Text, but the max seems 8192, which is insufficient for me.
Any ideas on how this can be achieved?
I'm using SQL Server Management Studio 2008, if that matters.
TIA!
Raj
The way I could get the entire data was by using the "Save Results As..." option and selecting a TXT file; then you can open it with a good editor like Notepad++ and you will have all the data.
Cheers =0)
try something like this:
--creates file on server
declare @cmd varchar(1000)
select @cmd = 'osql -U <user> -P <password> -S <server> -Q"select * from yourtable" -o"c:\yourtextfile.txt" -w50000'
exec master..xp_cmdshell @cmd
or
--creates file on server
exec master..xp_cmdshell 'bcp your_table_or_view out c:\file.bcp -S <server> -U <user> -P <password> -c'
or
--the limit of 8192 is per column, so split your column into multiple columns
--you will get a 1 character gap between these "columns" though
;WITH YourQuery AS
(
SELECT
col1
FROM ...
)
SELECT SUBSTRING(col1,1,8192), SUBSTRING(col1,8193,8192), SUBSTRING(col1,16385,8192) --...
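The SUBSTRING start offsets above follow a simple pattern; a small hypothetical helper computes them for any total length, so you know how many slice columns to write:

```python
# 1-based start offsets for SUBSTRING(col1, offset, 8192) slices.
def chunk_offsets(total_len, chunk=8192):
    return list(range(1, total_len + 1, chunk))

print(chunk_offsets(20000))
```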
A fast and dirty way:
1. Right-click the table - 'Edit Top 200 Rows'
2. Click 'Show SQL Pane'
3. Edit the SQL to return the required value
4. Click 'Execute SQL'
Now you can copy the big result.
I've just copied 87K text this way.