Use PowerShell to create Access 2007 queries? - ms-access

I have been following Richard Siddaway's awesome series on PowerShell + Access 2007.
Unfortunately it ends before discussing creating/running/modifying Access 2007 queries in PowerShell. How could this be done?

The cited series of articles uses a definition of stored procedure that is problematic. It says:
"An SP is a piece of code that we have defined, and saved in the database."
While this may be correct in a metaphorical sort of way, it's incorrect for Access/Jet/ACE. There is no CODE in the objects in a Jet/ACE database that are referred to by the generic term "procedure". In Access/Jet/ACE, a "procedure" is just a stored QueryDef; no procedural code is allowed. I don't know whether the OLEDB interface restricts it or not, but my guess is that PROCEDURE means a DML query and VIEW means a SELECT.
So (and I'm just guessing here: I'm an Access developer, so I have no need to do any of this externally), if you want to create/update a DML QueryDef, you'd use the PROCEDURE keyword and the relevant DDL for creating/altering PROCEDUREs. Likewise, with SELECTs, you'd use VIEW (I'm assuming).
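For what it's worth, here is a rough sketch of what that Jet DDL might look like; the table, column, and object names are invented, and each statement would be sent individually over an ADO/OLEDB connection (e.g. from PowerShell through an OleDbCommand), since Jet SQL accepts neither comments nor batched statements:

CREATE VIEW ActiveCustomers AS
SELECT CustomerID, CompanyName FROM Customers WHERE City = 'Springfield';

CREATE PROCEDURE SetCustomerCity (prmID LONG, prmCity TEXT(50)) AS
UPDATE Customers SET City = prmCity WHERE CustomerID = prmID;

EXECUTE SetCustomerCity 42, 'Shelbyville';

DROP PROCEDURE SetCustomerCity;

From the Access side these simply show up as saved Queries, which fits the "a procedure is just a stored QueryDef" description above.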

Is there any replacement for the InstrRev function in Microsoft Access 2016 for a calculated column?

I am attempting to create several calculated columns in a table with different parts of a parsed filename. Using the InstrRev function is critical to isolate the base file name or extension, but InstrRev is not supported in calculated columns.
I know that there are other ways to solve my problem that don't use calculated columns, but does anyone have a valid calculated column formula that could help me?
Access lets you use VBA functions (including user-defined functions) directly from within a SQL query; however, they only work within an Access context. If you have another front-end for a JET (now ACE) database, or if the expression is inside a computed/calculated column, they won't work, as you've just discovered.
Unfortunately, Access (JET and ACE) has only a very meagre and anaemic selection of built-in functions, and the platform has now lagged behind SQL Server (and even the open-source SQLite) significantly. Access 2016 has not made significant changes to its SQL implementation since Access 2000 (16 years of stagnation!), whereas SQL Server 2016's T-SQL language is so evolved it's almost unrecognizable compared to SQL Server 2000.
JET and ACE support the standard ODBC functions ( https://msdn.microsoft.com/en-us/library/bb208907(v=office.12).aspx ), but none of these performs a "reverse index-of" operation. Also absent is any form of pattern-matching function; the LIKE operator works, but it only returns a Boolean result, not a character index.
In short: what you want to do is impossible.
This has been discovered by many people before you:
https://social.msdn.microsoft.com/Forums/office/en-US/6cf82b1b-8e74-4ac8-9997-61cad8bb9310/access-database-engine-incompatible-with-instrrev?forum=accessdev
He maintains a list of DAO/Jet/etc. reserved words, and on that list you will see that InstrRev is a VBA function and is not part of the Jet/ACE engines.
using InStrRev() and similar functions in Jet/ACE queries outside of Access
As you have discovered, SQL queries executed from within Access can use many VBA functions that are not natively supported by the Jet/ACE dialect of SQL.
That said, computed/calculated columns are only really of use in stored VIEW objects ("Queries" in Access parlance), which in turn exist for user convenience rather than any programming advantage, especially as these are scalar functions that are evaluated for every row of data the engine processes (making them potentially very expensive and inefficient to run).
...so the only real solution is to abandon computed/calculated columns and perform this processing in your own application code; the advantage is that your program will likely be significantly faster.
...or don't use Access and switch to a different DBMS with better active support, such as SQLite (for an in-process database), SQL Server (now with LocalDb for in-process support), or VistaDB (proprietary, but 100% Managed code). Note that Access also supports acting as a front-end for a SQL Server "backend" data-store - where you could create a VIEW that performs this operation, then query the view from your Access code or other consuming client.
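If you do go the SQL Server back-end route, this particular task is easy there because T-SQL has REVERSE and CHARINDEX built in. A minimal sketch, assuming a hypothetical dbo.Files table with a FileName column:

-- BaseName/Extension split done server-side; Extension is NULL when there is no dot
CREATE VIEW dbo.FilesParsed AS
SELECT FileName,
       LEFT(FileName, LEN(FileName) - CHARINDEX('.', REVERSE(FileName))) AS BaseName,
       RIGHT(FileName, NULLIF(CHARINDEX('.', REVERSE(FileName)), 0) - 1) AS Extension
FROM dbo.Files;

Your Access front-end (or any other client) can then simply SELECT from dbo.FilesParsed.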
There is a workaround if you must: create a duplicate column that contains the string-reversed value of your original column; you can then evaluate the ODBC LOCATE or Jet SQL InStr functions on it and get the result you want (albeit reversed; see the query after the example below), but this would require double the storage space.
e.g.
RowId, FileName , FileNameRev
1 , 'Foo.txt', 'txt.ooF'
2 , 'Bar.txt', 'txt.raB'
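From there, plain InStr, Left and Right on the original and reversed columns are enough to split the name again. This is only a sketch: the Files table name is made up, it assumes every file name actually contains a dot (as in the sample rows above), and it relies on those functions being available to the Jet/ACE expression service outside Access, which you should verify for your setup:

SELECT FileName,
       Left(FileName, Len(FileName) - InStr(FileNameRev, '.')) AS BaseName,
       Right(FileName, InStr(FileNameRev, '.') - 1) AS Extension
FROM Files;

With the sample rows above this returns 'Foo'/'txt' and 'Bar'/'txt'.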
Avoid any calculated field. It's a "super user" feature only, and it will cause you nothing but trouble. Calculated fields, or expressions, belong in a query.
So create a simple select query:
Select
    *,
    InStrRev([FieldToCheck], "YourMatchingString") As StringMatch
From
    YourTable
Save the query, and then use this whenever you need the table values and this expression.

MySQL: Stored procedures are more secure than queries?

I have a website using a MySQL database and I want to do common tasks like adding users, modifying their info, etc. I can do it perfectly with regular queries. I'm using prepared statements to increase security.
Should I use stored procedures to increase security further, or will the results be the same? I thought that maybe by using stored procedures I could restrict the direct interaction that a possible attacker could have with the real query. Am I wrong?
I guess it would depend on what language you're using. Using a prepared statement with a SQL string that contains all of the SQL to be executed, or using a prepared statement with a SQL string that executes a stored procedure, are going to be about equivalent in most languages. The language should take care of the security around the prepared statement. C#, for example, will validate the input, so SQL injection vulnerabilities are greatly reduced unless your prepared statement is written so poorly that feeding it bad (but expected, i.e. 1 vs 0) variables will dramatically change the result set. Other languages may not provide the same level of validation, though, so there may be an advantage depending on exactly what your stored proc looks like.
Using a stored procedure is better for maintainability, but there are not many scenarios where it's going to provide any sort of change in security level, assuming the program is properly designed to begin with. The only example I can think of off the top of my head would be a stored procedure that takes raw SQL strings from user input and then executes that SQL against the db. This is actually less secure than using a prepared statement unless you went to great lengths to validate the acceptable input, in which case you'd better have a really good reason for using such a stored proc in the first place.
Basically, what I'm saying boils down to the fact that you're going to need to read the documentation for your language's prepared statements, determine what vulnerabilities, if any, they may have, and whether those can be eliminated in your specific scenario by switching to a prepared statement that calls a stored procedure instead of executing a SQL query directly.
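As a rough illustration of the two shapes this can take in MySQL itself (the users table and its columns are invented; your language's driver normally wraps the prepared-statement part for you):

-- A stored procedure wrapping the insert
DELIMITER //
CREATE PROCEDURE add_user(IN p_name VARCHAR(50), IN p_email VARCHAR(100))
BEGIN
    INSERT INTO users (name, email) VALUES (p_name, p_email);
END //
DELIMITER ;

-- The equivalent parameterised statement; either way the values are bound
-- as data rather than concatenated into the SQL text
PREPARE add_user_stmt FROM 'INSERT INTO users (name, email) VALUES (?, ?)';
SET @n = 'alice', @e = 'alice@example.com';
EXECUTE add_user_stmt USING @n, @e;
DEALLOCATE PREPARE add_user_stmt;

Injection resistance comes from the parameter binding in both cases, which is why the security outcome is roughly the same.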
The results would be the same (assuming that you set your stored procedure up right).
There appears to be a pretty good write-up on it here, though I would never suggest you try to escape user input yourself. (They mention this as option 3.)

SQL Server Object Dependencies

Red Gate has some pretty good tools, but I don't think that their Dependency Tracker shows how tables are affected by the stored procedures that touch them.
Is there any tool that can scan a database and determine what processes INSERT, UPDATE or DELETE records from a table, as opposed to just touching/being dependent on it? Seems like this should exist by now...
No, dependency tracking still isn't perfect. The reason is that procedures can reference tables via dynamic SQL, and dependencies can be broken if objects are dropped and re-created (I've written about how dependencies can break here). The best "first sweep" I have come to rely on is:
SELECT OBJECT_NAME([object_id])
FROM sys.sql_modules
WHERE LOWER(definition) LIKE '%table_name%';
Again, this won't find objects that build statements using dynamic SQL, and it can produce false positives because table_name could be simplistic and part of other object or parameter names, or included only in comments or commented-out code.
You can also check for plans that reference a table using sys.dm_exec_cached_plans and related DMFs/DMVs, but note that this won't find any plans that have rolled out of the cache.
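A sketch of that plan-cache check, with the same placeholder table_name and the same false-positive caveats as the query above:

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%table_name%';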
Using SQL Search, you can search for the column name and find all the stored procedures where it is used.
It's a third-party tool from Red Gate called SQL Search.
Features:
Find fragments of SQL text within stored procedures, functions, views and more
Quickly navigate to objects wherever they happen to be on a server
Find all references to an object
Hope this will help you.

SQL Server sp_help: how to limit the number of output windows

Normally successful execution of
sp_help [object_name]
in SQL Server returns a total of 7 output windows with various results, of which I am normally interested in only 2: the one with all the column information and the one with the constraints.
Is there a way I can tell SQL Server to only display these while formulating the command?
Short answer: no, you can't do this directly, because the procedure is written to return that data and T-SQL has no mechanism for accessing specific result sets.
Long answer: but you can easily get the same information from other procedures or directly from the system catalog:
sp_columns, sp_helpconstraint (this is actually called by sp_help) etc.
sys.columns, sys.objects etc.
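For example, substituting your own table name (this is only the catalog route sketched out, not a full replacement for sp_help):

-- Procedure-based equivalents of the two result sets of interest
EXEC sp_columns 'YourTable';
EXEC sp_helpconstraint 'YourTable';

-- Or straight from the catalog views
SELECT c.name, t.name AS data_type, c.max_length, c.is_nullable
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.YourTable');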
There's also the option of copying the source code from sp_help and using it as the basis of a new procedure that you create yourself, although personally I would just write it myself from scratch. If you do decide to write your own stored proc, you might find this question relevant too.

Calling T-SQL stored procedure from CLR stored procedure

Very brief background:
We are making use of CLR stored procedures to apply access control, using Active Directory, on query results to restrict what the end user can see accordingly. In a nutshell, this is done by removing rows from a datatable where the user does not satisfy the criteria for access to the result (document in this case).
This filtering was previously done on the client before displaying the results. SQL 2008 and a much more powerful server is the motivation for moving this access filtering off the client.
What I am wondering is: is there any performance benefit to be had from calling the original regular T-SQL stored procedure from the CLR stored procedure equivalent, instead of having 'inline' T-SQL passed into the command object (which in this case is just the original T-SQL that was made into a stored procedure)? I cannot find anywhere where someone has mentioned this (in part probably because it would be very confusing as an example of CLR SPs, I guess :-) ).
It seems to me that you might, as the T-SQL stored proc has already been optimised and compiled?
Is anyone able to confirm this for me ?
Hope I've been clear enough. Thanks very much,
Colm.
If your SQL CLR stored procedure does a specific query properly (nicely parametrized) and executes it fairly frequently, then that T-SQL query will be just run once through the whole "determine the optimal execution plan" sequence and then stored in the plan cache of your SQL Server (and not evicted from it any faster than a similar T-SQL stored procedure).
As such, it will be just as "pre-compiled" as your original T-SQL stored procedure. From that point of view, I don't see any benefit.
If you could tweak your SQL statement from within your SQL CLR procedure in such a way that it would actually not even include those rows into the result set that you'll toss out in the end anyway, then your SQL-CLR stored procedure executing a properly parametrized T-SQL query might even be a bit faster than having a standard T-SQL stored procedure return too much data from which you need to exclude some rows again.
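Purely as an illustration of that last point (the tables, columns and @UserSid parameter are invented; the idea is just to push the access check into the WHERE clause rather than discarding rows afterwards):

-- Only return documents the calling user is allowed to see
SELECT d.DocumentId, d.Title
FROM dbo.Documents AS d
WHERE EXISTS (SELECT 1
              FROM dbo.DocumentAccess AS a
              WHERE a.DocumentId = d.DocumentId
                AND a.UserSid = @UserSid);

Whether you can actually express your Active Directory check this way depends on how the access rules are stored, of course.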