What’s the best practice for integrating SQL Server with Active Directory (AD)?
NB. I’m using SQL Server 2016
Crux of the issue: I'm using SSRS 2016 and have several reports that need to be filtered based on the user accessing them. Originally I created a table of the users who would need to access the reports. Then, in Report Builder, I passed the UserID as a parameter within the query so that the resulting dataset would be limited to the data the user needed to see.
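To illustrate, the dataset query looked something like this (the table and column names here are simplified stand-ins; SSRS's built-in User!UserID value is what gets mapped to the @UserID parameter):

SELECT f.*
FROM dbo.SalesFacts AS f
JOIN dbo.ReportUsers AS u
    ON u.Region = f.Region       -- row-level filter: each user only sees their region
WHERE u.UserID = @UserID;        -- supplied by the report parameter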
The problem this created is that the User table would have to be maintained, and Active Directories are dynamic. Now that I have some time to develop a better option, I’d like to link the LDAP data with SQL Server.
I’m wondering what the best practice for doing this is.
One way I pursued this was through an SSIS package with an ADO.NET connection: pull the LDAP data, convert it, load it into a table, and then schedule a job to run the package however often I needed. This was problematic because, for whatever reason, I couldn't get the data conversion step to work.
The second way I've been approaching this is to create a linked server for the AD. My research indicated that I'd need to create a function that overcomes the string limitation of the xp_sprintf function, then use temp tables and loop through the LDAP data to get around the 1,000-record limit AD puts on query results. I've been able to accomplish all of this.
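For anyone following along, the core of that approach is an ADSI linked server queried through OPENQUERY, roughly like this (the domain components and attribute list are placeholders; the looping that works around the 1,000-row limit isn't shown):

-- One-time setup: linked server against the ADSI OLE DB provider
EXEC sp_addlinkedserver
    @server     = 'ADSI',
    @srvproduct = 'Active Directory Service Interfaces',
    @provider   = 'ADSDSOObject',
    @datasrc    = 'adsdatasource';

-- Pull user attributes out of AD
SELECT sAMAccountName, displayName, mail
FROM OPENQUERY(ADSI,
    'SELECT sAMAccountName, displayName, mail
     FROM ''LDAP://DC=mydomain,DC=local''
     WHERE objectCategory = ''Person'' AND objectClass = ''user''');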
At this point though, there appears to be some other issues.
This ultimately increases the code necessary in the views for my reports which may make it harder for other database users to update if & when the time comes. To the point that I'd need to abandon the views and create stored procedures for the reports to pull from.
This also increases the transaction count, because every time a user accesses a report there is now an LDAP query in addition to the SQL Server work.
So to resolve that, I could wrap the original LDAP query in a stored procedure that loads a table, and then create a job to run that stored procedure every so often.
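Roughly what I have in mind for that is a procedure like the following (a sketch only; dbo.ADUsers and the attribute list are placeholder names), with a SQL Agent job calling it on a schedule:

CREATE PROCEDURE dbo.RefreshADUsers
AS
BEGIN
    -- Rebuild the local cache of AD users that the report views join to
    TRUNCATE TABLE dbo.ADUsers;

    INSERT INTO dbo.ADUsers (sAMAccountName, displayName, mail)
    SELECT sAMAccountName, displayName, mail
    FROM OPENQUERY(ADSI,
        'SELECT sAMAccountName, displayName, mail
         FROM ''LDAP://DC=mydomain,DC=local''
         WHERE objectCategory = ''Person'' AND objectClass = ''user''');
END;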
Either option solves the problem of maintaining the users table which is good, but it isn't perfect because AD changes can take place at any time.
Which option is better here?
If the SSIS package is the better route, I’m curious as to why that is the better route. I’m not opposed to going back and figuring out what it is I’m missing on the SSIS package to make it work.
Are there additional options I should consider if I want to get the most up-to-date Active Directory listing?
Thanks.
I have an MS Access 2016 application that a few people use in one department. I know this whole thing has web dev written all over it, but this Access database has been their process for a while and there is no time right now to switch over.
Recently, a different department has asked to use this application, but with their own copy. Currently, if I need to make changes, I make the changes in a copy of the app; they send me a current version when I'm ready to import their data, I import it, and I send them back a new one. However, right now I copy the data table by table and paste it into the new database. This is inefficient and tedious, and now with 2 sets of data to do this for, that's crazy. There are over 20 tables, so I don't want to have to manually copy 40+ tables across the 2 apps for even the smallest change, like altering a message to the user.
I know I can copy the code over so I can avoid importing the data, but sometimes for big changes I'll touch 15-20 VBA modules.
So, a couple questions:
1. Is there a way to generate insert statements for the entire database that I could run in a script? So when I create the new copy I just upload 1 file and it populates all the data?
2. Are there any dev tools that will help with this process? Right now I'm thinking that this is just a downfall of creating an MS Access app, but there must be some way people have made the "new release" process easier. My current system seems flawed and I'm looking for a more stable process.
EDIT:
Currently I have all my data stored locally, in the same Access file as the front end. Since I will have 2 different departments using the same functionality, how do I manage the data and the front end? These 2 departments should each have their own Access file to enter data through the forms, so sharing 1 front end between the 2 departments won't work.
Also, should I create 2 separate back-ends? Currently I would have nothing to distinguish what is being inserted/changed/deleted by one department versus the other. If I were to add a field specifying who entered the record, that would require a complete overhaul of all my queries, which I don't have time for as there are deadlines I need to meet.
First thing is to split the database. There is a wizard for this.
Then you can maintain the frontend without touching the real data.
Next, consider using a script to distribute revised versions of the frontend. I once wrote an article on one proven method to handle this:
Deploy and update a Microsoft Access application in a Citrix environment
I have 2 questions about using Access:

I create a form with a combo box and calendars. I want to choose an employee from the combo box, plus a from date and a to date, and when I click OK, pass these parameters to a query that returns the result (the result is the calculation of that employee's salary).

I know how to release an Access project so that the user can't access tables and queries, only forms. Is there any way to change the Access project from release mode back to development mode? Suppose an error occurs: how do I fix it without losing my data?

Note: I don't have a client/server setup. I develop a program, release it, and give the release to the user. After a while the user tells me that an error occurred, and he needs the data he has entered through this program. I can fix the problem and release another version of the program, but the main problem is how to move all the data from the old program to the new one.
-- You can reference a form control directly in a query:

SELECT *
FROM MyTable
WHERE EmployeeID = Forms!MyForm!cboEmployee
AND SomeDate BETWEEN Forms!MyForm!txtDateStart AND Forms!MyForm!txtDateEnd
You could also build an SQL string and use it as the record source for a form or in VBA.
-- Access should be split into front-end (forms, reports, etc) and back-end (data). When you make changes to the front-end, you create a new mde or accde and send that to the users. The data stays on a server in the back-end.
See: http://msdn.microsoft.com/en-us/library/aa167840(v=office.11).aspx
EDIT
From your comments, it seems that each application has a single user. If that is the case, splitting is not essential, but it can still be a good idea. The user will get two databases, one for the data and one for forms etc., and only the one for forms gets replaced. You will need to include a routine to locate and link the back-end tables.
However, if this is not possible, note that an mde or accde does not hide the data; you can send your revised copy and include a routine to import from the previous mde/accde.
EDIT 2
There are wizards that will split your database for you and link the tables. Where you find them varies slightly from version to version, but they are under the menu item Database Tools. Linked tables are how you access the data in the second database: they act as if they were tables in the first database, except that you cannot change their design. The one problem is that a linked table stores the location of the back-end, which will be the path on your computer, not on your user's computer, so it has to be re-pointed when you send the front-end out. You can either write code for that or show your user how to use the Linked Table Manager. This may lead to confusion and may not be worth the effort for one PC. (See also http://www.alvechurchdata.co.uk/accsplit.htm)
Alternatively, you can split the database on your PC and make all the changes to forms etc that you want, then add some code that will import the tables and other data for the user into your new copy. The user will follow the instructions in your code to import the tables. As an aside, you will find that development is a lot safer on a split database. You should also decompile from time to time, which you can find at http://www.granite.ab.ca/access/decompile.htm.
If you want to protect your code, you can create a compiled version of this new copy; the extension for a compiled Access database is *.accde for 2007 onward and *.mde for prior versions. This is what I thought you meant by 'I know how to release an Access project'.
We currently have an OLTP SQL Server 2005 database for our project. We are planning to build a separate reporting database (de-normalized) so that we can take the load off our OLTP DB. I'm not quite sure of the best approach to sync these databases. We are not looking for a real-time system, though. Is SSIS a good option? I'm completely new to SSIS, so I'm not sure about the feasibility. Kindly provide your inputs.
Everyone has their own opinion of SSIS, but I have used it for years for data marts and in my current environment, which is a full BI installation. I personally love its capabilities for moving data, and it still holds the world record for moving 1.13 terabytes in under 30 minutes.
As for setup we use log shipping from our transactional DB to populate a 2nd box. Then use SSIS to de-normalize and warehouse the data. The community for SSIS is also very large and there are tons of free training and helpful resources online.
We build our data warehouse using SSIS, and we run reports from it. It's a big learning curve and the errors it throws aren't particularly useful, and it helps to be good at SQL rather than treating it as a 'row by row transfer' - what I mean is you should be creating set-based queries in SQL command tasks rather than using lots of SSIS components and dataflow tasks.
Understand that every warehouse is different and you need to decide how best to build yours. This link may give you some good ideas.
How we implement ours (we have a Postgres back end and use the PGNP provider; making use of linked servers could make your life easier):
First of all you need to have a time-stamp column in each table so you can tell when it was last changed.
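For example, in T-SQL (our source is Postgres, but the idea is the same on SQL Server; the table and key names here are just illustrative) a datetime column with a default plus an update trigger is enough:

-- Add a last-modified column to a source table
ALTER TABLE dbo.Jobs
    ADD LastModified datetime NOT NULL
        CONSTRAINT DF_Jobs_LastModified DEFAULT (GETDATE());
GO
-- Keep it current whenever a row is updated
CREATE TRIGGER trg_Jobs_LastModified ON dbo.Jobs
AFTER UPDATE
AS
BEGIN
    UPDATE j
    SET    LastModified = GETDATE()
    FROM   dbo.Jobs AS j
    JOIN   inserted AS i ON i.JobID = j.JobID;
END;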
Then write a query that selects the data that has changed since you last ran the package (using an audit table would help) and get that data into a staging table. We run this as a dataflow task because (using Postgres) we don't have any other choice, although you may be able to make use of a normal reference to another database (dbname.schemaname.tablename or something like that) or use a linked server query. Either way the idea is the same: you end up with only the data that has changed since your last run.
We then update (based on ID) the data that already exists, then insert the new data (by left joining the staging table to the warehouse table to find what doesn't already exist there), as sketched below.
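In T-SQL the staging and update/insert steps look roughly like this (the staging, warehouse, and audit table names are made up for the sketch):

-- 0) Stage only the rows changed since the last successful run
INSERT INTO staging.Jobs (JobID, JobDate, Status)
SELECT JobID, JobDate, Status
FROM   src.Jobs
WHERE  LastModified > (SELECT MAX(LastRunTime) FROM etl.AuditLog);

-- 1) Update rows that already exist in the warehouse
UPDATE w
SET    w.JobDate = s.JobDate,
       w.Status  = s.Status
FROM   dwh.Jobs AS w
JOIN   staging.Jobs AS s ON s.JobID = w.JobID;

-- 2) Insert rows that are new since the last load
INSERT INTO dwh.Jobs (JobID, JobDate, Status)
SELECT s.JobID, s.JobDate, s.Status
FROM   staging.Jobs AS s
LEFT JOIN dwh.Jobs AS w ON w.JobID = s.JobID
WHERE  w.JobID IS NULL;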
So now we have one denormalised table that shows, in this case, jobs per day. From this we calculate other tables based on aggregated values from this one.
Hope that helps, here are some good links that I found useful:
Choosing .Net or SSIS
SSIS Talk
Package Configurations
Improving the Performance of the Data Flow
Transformations
Custom Logging / Good Blog
I've perused the threads here on migration from SQL 2000 to SQL 2008 but haven't really run into my question, so here we go with another one.
I'm building a strategy to move specific SQL 2000 databases to a new SQL 2008 R2 instance. My question comes with regards to the best method for transferring the schema and data. One way I know of is to do the quick 'n' dirty detach - copy - attach method, which should work so long as I've done my homework wrt compatibility and code and such.
What if, though, I wrote the schema and logins via script and then copied the data via SSIS? I'm thinking of trying that so I can more easily integrate some of my test cases into the package (error handling and whatnot). What would I be setting myself up for if I did this?
Since you are moving the data between servers or instances, I would recommend moving the data via data flows. If you don't expect to run the code more than once, then you can let the wizard generate your code for this move. However, when I did this once 2+ years ago, the wizard-generated package combined many "create table" commands into a single Execute SQL Task and created a few data flow tasks that had multiple sources and destinations in them to insert the data. This was good for getting up and running, but it was inadequate when I wanted to refresh the tables one more time after I modified the schema of the new target tables. If you expect to run the refresh more than once, then you may want to take the time to create the target schema first and then manually create the data flows.
Once you have moved the data, then you can enable full-text search on the new server. I don't believe you will need to have this enabled on your first load.
One reason I recommend against the detach-attach method for migration is that you bring all the dirty laundry from the 2000 database to the 2008 R2 database. If you had too-lax security on the 2000 server or many ancient users that shouldn't exist, it could be easier to clean this up by starting from scratch. If you use the detach-attach method, then you have to worry about cleaning those users up yourself.
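If you do end up attaching the old databases, a quick way to see which database users no longer map to a login on the new instance is the standard orphaned-user report (nothing specific to this migration, just the stock procedure):

-- List database users whose SIDs have no matching server login
EXEC sp_change_users_login @Action = 'Report';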
For security reasons, I'm in an environment where third-party apps can't access my DB. For this reason I need some service/tool/script (dunno what yet... I'm open to the best option, still reading to see what I'm going to do...) that enables me to generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application.
I should be able to automate this process and also export a new file on demand at any time.
So it should keep track, for each application, of which records that application still needs.
Each application will need the data in a different format (csv/xls/sql), and some fields will be needed by one application but not by another... It should be fairly flexible...
What is the best option for me? Creating some custom tables for each application and extracting the modified data based on those?
I think your best bet here, assuming you have access to the server to set this up, is to make a small command-line program that can do the relatively simple task you need. Languages like Perl are good for this sort of thing, I believe.
Once you have that 'tool' made, you can schedule it through the OS of the server to run every set amount of time: either a scheduled task on a Windows server or a cron job on a Linux server.
You can also (without having to set up the scheduled task, if you don't want to or can't) enable this small command-line application to be called via CGI; this is a special way of letting applications on the server be executed at will by a web user. If you do enable this though, I suggest you add some sort of locking system so that it can only be run every so often, and to stop it being run five times at once.
EDIT
You might also want to look into database replication or adding read-only users. This saves a whole lot of arsing around. Try to find a solution that does not split or duplicate data. You can set up users so they can only access certain parts of the database system in certain ways, such as SELECT on the data.
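If the back end is SQL Server, a minimal read-only account looks something like this (the login name, database, and schema are placeholders):

-- Create a login and a matching database user with read-only access
CREATE LOGIN report_reader WITH PASSWORD = 'ChangeMe-StrongPassword!';
GO
USE MyAppDb;
GO
CREATE USER report_reader FOR LOGIN report_reader;
GRANT SELECT ON SCHEMA::dbo TO report_reader;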