OpenEdge ODBC for Access - ms-access

I cannot update the links in my Access database to a new ODBC driver.
I believe the problem is that OpenEdge 10.1C is not completing the handshake with Access.
The DSN will import external data into Excel. The error is "error(-7748) there is no message for this error".

Most "weird" errors involving SQL and Progress are a result of the fact that Progress stores all data as variable length. Furthermore, most of the data in most Progress databases is created, updated, and manipulated by 4GL programs, and those programs have no awareness of, nor any sensitivity to, SQL's ideas regarding column width.
Your first line of defense when you get a strange error querying a Progress database with SQL should be to run dbtool (on the db server) to fix any possible SQL width issues. Simply run dbtool (found in the Progress "bin" directory: $DLC/bin/dbtool if the OS is UNIX; on Windows, use "proenv" to get a command prompt and then run %DLC%\bin\dbtool) and select option 2. You may want to script this and run it automatically if you frequently have issues.

Progress ODBC error -7748 can be solved by adding a registry entry. This discussion explains the workaround and what it does.
Essentially, under the registry key:
HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\[Your data source name]
you add a string value named 'Workarounds2' and set its value to 8192.
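If you want to create the entry from code rather than regedit, a minimal VBA sketch might look like this (the DSN name "MyProgressDSN" is a placeholder - substitute your actual data source name):

    Dim shl As Object
    Set shl = CreateObject("WScript.Shell")
    ' Create (or overwrite) the Workarounds2 string value under the DSN's key
    shl.RegWrite "HKCU\Software\ODBC\ODBC.INI\MyProgressDSN\Workarounds2", "8192", "REG_SZ"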
See:
http://media.datadirect.com/download/docs/odbc/allodbc/index.html#page/odbc/workaround-options.html

Related

MS Access handle ODBC fault at opening database

I have an MS Access frontend which uses a SQL database as the backend. To connect, I use an ODBC connection and have created the needed entries in "ODBC Data Sources (32 Bit)".
When I give the database to others, they will need to create this data source. I have a batch file for this so they can just run it.
If they do not run it, they will get an error like "ODBC connection to XY failed". How can I change this error, or at least show a second message box afterwards telling them "run the batch file XY to connect"?
When I give the database to others, they will need to create this data source.
That is your first bad mistake and assumption. You MOST certainly do NOT want to deploy your application that way.
The way you deploy?
You take your accDB file, and link the tables to SQL server. And you link using what are called DSN-less connections. Such connections do NOT require ANY data source to be set up on each workstation.
So, ok, now you have linked the tables to SQL server (the production one - you probably were developing locally on your developer PC, using a local copy of SQL Server Express edition).
So now you link the tables to THEIR server, and then you compile the accDB down to a compiled executable version of Access - an accDE.
You are now free to deploy this "application" to any and all workstations for that company - and they do not have to re-link tables, do not have to set up a data source, and in fact they don't have to do anything but simply run/launch the application.
How do you make and get a DSN-less connection?
Well, the MOST simple way is to ALWAYS, but ALWAYS ALWAYS, create the linked tables using a FILE dsn. In fact, when you launch the ODBC connection manager from Access, the default is to use a FILE dsn. Never use a "system" or "user" dsn.
If you link the tables using that FILE dsn, then Access converts them to DSN-less connections. At that point, you can even delete the DSN you created - Access 100% ignores the DSN, and you don't need it anymore.
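For reference, if you prefer to create such a link in VBA rather than through the linked table manager, a minimal DAO sketch looks roughly like this (server, database, and table names are placeholders):

    Dim tdf As DAO.TableDef
    Set tdf = CurrentDb.CreateTableDef("dbo_Customers")
    ' A DSN-less connection string - no data source needs to exist on the workstation
    tdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"
    tdf.SourceTableName = "dbo.Customers"
    CurrentDb.TableDefs.Append tdf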
Next up:
If you have been using the nc 17, or nc 11-18 drivers? Then yes, each workstation MUST have that same driver installed. Or, you can use the older "legacy" SQL driver. But be careful - the legacy driver does not support datetime2 column data types.
So, MAKE double, triple, quadruple sure that you are not using data types and columns that require the newer drivers.
Right now, for some of the larger sites, we STILL use the older "legacy" driver, since that driver is installed at the OS level, and has been installed on every copy of Windows going back to Windows XP - in fact, Windows 98SE started shipping that driver. So you can 100% assume that the legacy SQL driver is and will be installed on each workstation.
And by using + adopting DSN-less connections, no setup of the connection on each workstation is required. As long as those workstations are on the same network to the SQL server as when YOU linked the tables, they are good to go.
Now, on some sites, we actually can't even be on site, and we can't even pre-link to their SQL server. So what we do is, on application startup, we check the current link for a linked table, and if it does not match a little external text file we ALSO included when setting up the workstation, then we use VBA code to re-link the tables on startup. But once again, re-linking tables in VBA is easy, and again does NOT require ANY KIND of "dsn" or ODBC setup on each workstation.
And in fact, another way we used to do this was to have a table in the front end with one record, and that record held the connection string. So, right before deployment, we just edit that one-record table in the front end to have the correct connection to their SQL server. And once again, on startup, we check a linked table and see if the connection strings match; if they don't, then we run the VBA re-link code (a sketch follows below), and once again, zero configuration and zero need exists to set up anything at all on each workstation.
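A rough sketch of that startup re-link code (assuming strNewCon has already been read from the external text file or the one-record settings table described above):

    Public Sub ReLinkTables(strNewCon As String)
        Dim tdf As DAO.TableDef
        For Each tdf In CurrentDb.TableDefs
            ' Only touch ODBC linked tables whose connection string has changed
            If Left$(tdf.Connect, 5) = "ODBC;" Then
                If tdf.Connect <> strNewCon Then
                    tdf.Connect = strNewCon
                    tdf.RefreshLink
                End If
            End If
        Next tdf
    End Sub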
So, as a general rule, every dog, frog, and insect that ever deployed Access applications? We set up some re-link code and check that link on startup. In fact, most developers have done this even without SQL server - even when using an Access back end, re-link code is included to resolve this issue.
But, be it linking to an Access back end, or a SQL server back end? Some kind of link-check system is assumed to have been cooked up by you, and this code will run on startup to check if the linked tables are pointing to the correct location.
And be the back end Oracle, SQL server, or whatever? You can create what are called DSN-less linked tables. As noted, you can use VBA to do this, or in fact you can use the linked table manager - as LONG AS you use a FILE dsn when you linked, Access converts that to DSN-less for you, and you are good to go.
So, in effect, you don't have to test/check for an ODBC fail, since you checked for the correct connection string on startup.
However, there is a way to trap and check for an ODBC failure, and it involves using a DIFFERENT way to connect, since we all know that if you have an ODBC fail, you are duck soup (you have to exit Access, and there is NO KNOWN way around this issue - well, except for testing if you can connect, and for that test you do NOT use a linked table, since, as noted, once an ODBC connect error triggers, it is game over).
The way you do this "alternate" test is like this:
Function TestLogin(strCon As String) As Boolean
    On Error GoTo TestError

    Dim dbs As DAO.Database
    Dim qdf As DAO.QueryDef

    Set dbs = CurrentDb()
    Set qdf = dbs.CreateQueryDef("")

    qdf.Connect = strCon
    qdf.ReturnsRecords = False

    ' Any valid SQL statement that runs on the server will work below.
    ' This assumes the user has sufficient rights to run a simple query.
    qdf.SQL = "SELECT 1"
    qdf.Execute

    TestLogin = True
    Exit Function

TestError:
    TestLogin = False
    Exit Function
End Function
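For example, called on startup (the connection string here is a placeholder - substitute your own server, database, and logon):

    Dim strCon As String
    strCon = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;UID=AppUser;PWD=secret;"
    If Not TestLogin(strCon) Then
        MsgBox "Could not connect to SQL server - run the setup batch file or check your network.", vbCritical
        Application.Quit
    End If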
So, the above does not hit or use a linked table. And a HUGE bonus of the above? Once you execute that valid logon, any and all linked tables will work - AND WILL WORK without you even having included the user/password in the connection string for the linked tables. This is in fact a huge bonus in terms of security, since now you don't have to include the user/password in the linked tables, which of course would expose the user/password in plain text to any user who cared to look for the SQL user/password used.
In fact, what this means is that you can link your tables but NOT include the user/password when you link them!!! - this is a HUGE security hole plugged and fixed when you do this.
So, once a valid logon has occurred (such as above), any and all linked tables will now work, and work without the user/passwords ever having been included in those linked table connection strings.
As noted, the other big bonus is that you can use the above code to test for a valid connection and avoid that dreaded "odbc" error, since, as noted, if an ODBC connection error is EVER triggered at any point in time, you MUST exit the application - there is no other way out.
However, it should be noted that if you are ever going to use a wi-fi connection, or say a cloud-based SQL server running on Azure?
In that case, often with wi-fi, or a cloud-based edition of SQL server, such connections over the internet are of course prone to minor and frequent disconnects.
ODBC was developed long before the internet, and long before people would do things like connect Access to some cloud-based SQL server over an internet connection. But if this turns out to be your use case and deployment case?
Then you have to bite the bullet and ASSUME and ENSURE that you now adopt the nc 11-18 drivers (I would go with nc 17). These newer drivers are "internet" aware, and they are able to gracefully handle minor disconnects, and in fact automatically recover and re-connect.
So, if you are ever going to use wi-fi, or connect to a cloud-based server? Then yes, you have to link the tables using, say, the newer nc 17 driver, and you ALSO MUST THEN ensure that the same driver version you linked the tables with is installed on each workstation. You still don't have to set up any dsn connection and all that jazz - but you do have to ensure that the driver you used is ALSO installed on those workstations.
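If you want the application to verify this on startup, one hedged sketch is to read the installed-driver list from the registry (the driver name passed in is an assumption - use the exact name of the driver you linked with; note that 32-bit Access will see the 32-bit driver list):

    Function DriverInstalled(strDriver As String) As Boolean
        ' Returns True if the named ODBC driver appears in the registry's
        ' installed-driver list. RegRead raises an error if the value is missing.
        On Error Resume Next
        Dim strVal As String
        strVal = CreateObject("WScript.Shell").RegRead( _
            "HKLM\SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers\" & strDriver)
        DriverInstalled = (strVal = "Installed")
    End Function

    ' e.g. If Not DriverInstalled("ODBC Driver 17 for SQL Server") Then MsgBox "Driver missing."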
As noted, for larger deployments we thus use the standard "legacy" SQL driver, as it would be too painful to go to all the workstations and install a newer driver.
However, we had one location, and for months they were experiencing ODBC connection failures. We had them replace a router, and even a network card in a server - but the problem still remained.
We suspect that some workstations had aggressive power management turned on; newer Windows 10-11 will often put the network card to sleep, and thus when using Access we were seeing ODBC errors. So, for that company, we had them install nc 17, and linked Access to SQL server using that driver, and the problem went away (because those newer drivers have built-in re-connect ability - this is a relatively new feature of ODBC, and one that legacy systems and drivers don't have).

Successful ODBC connection, but can't load the table in Runtime

Update: I ran a SQL connection test function in my VBA and it reports open from my main workstation; I ran the same connection script at the client workstation and Runtime crashed. I also checked to make sure it was using a 32-bit rather than 64-bit driver, and that the credentials were identical to another station that does work. If I run a connection test, Runtime crashes; if I try to only run the query, it says the connection can't be found. Both were attempted from the user's computer.
I successfully opened an ODBC Unicode connection to the MySQL database from the client computer at the System DSN level. When I open the Access Runtime file and try to use a form to query the database, I receive the error: "ODBC--connection to 'servername' failed." I have tried numerous names, including changing the case sensitivity. I verified that the TCP/IP address I used was indeed the host for this database, and that the name (checked using ipconfig /all) was of the appropriate case sensitivity. I have not been able to figure out if it is an issue with Access vs. Runtime, but I can't really see that being the problem here.
The name of the table is "tbl_panel," and it is definitely within the database I connected to with ODBC, with that exact name. The user account for the connection has basically "Read-Only" privileges, but that is all it should need, as I am only checking the data - unless creating a recordset is beyond the scope of SELECT, SHOW VIEW, CREATE TEMP TABLES. Furthermore, the fact that it can't find the server itself tells me it's probably not to do with my SQL/VBA coding. Hovering over tbl_panel in the Navigation Pane of Access shows "ODBC;DSN='servername';;TABLE=tbl_panel".
Here is the SQL string for creating the record set (truncated for space since the statement itself works fine):
stSQL1 = "SELECT tbl_panel.PNL_SN_ID FROM tbl_panel " & _ etc.
Set qryList = dbsInspect.OpenRecordset(stSQL1)
This has me pretty stumped, and I am a rookie when it comes to ODBC, so if it is something obvious, please be kind. I did do a lot of searching, but most ODBC questions involve issues with the initial setup of the data source, or with opening the connection in code. Is that a possibility for what I have to do? Include an opening statement for the table in VBA so that Runtime knows what to do? I'm going to feel silly if that is the likely problem and I typed all this for nothing.
This ended up being a bitness (32- vs 64-bit) issue. Despite using the ODBC connection tool in Control Panel --> Administrative Tools, I needed a 32-bit connection. This is answered in another SO question.
HOWEVER, I will note that the only way I ended up getting a useful error message was by creating a backend. Then it displayed the error message in the linked question below:
Related Question on SO

Timeout issue during data transfer from MySQL to SQL Server using SSIS

I am trying to transfer 67,714,854 rows from MySQL to SQL Server using SSIS. The package times out after transferring 14,282,990 rows. I changed the timeout property to 0 as well, but that didn't help.
How do I resolve this issue?
I found a hacky solution to it: having a LIMIT at the end of your query. I was facing the same problem with an ADO.NET connection to MySQL. Although it doesn't solve the underlying problem, it at least gets the work done.
SSIS: 2008 R2.
MySQL: 5.0
On your OLE DB Destination connection, what "Data access mode" have you selected? If you have selected "Table or view - fast load" (the default), then there will be a "Maximum insert commit size" specified. You can try one of two things: 1) change the commit size to a larger number; or 2) try the other data access mode, "Table or view". Since you're getting a timeout error, I suspect that option 1 may not help (you're already getting a timeout with a smaller value), so try option 2. Although that will likely result in slower performance, it may actually complete successfully. (You could then try @Siva's approach and split the output across multiple destinations to improve performance.)
(Note: I'm referring to what's available in SQL Server 2008 R2, if you're using previous versions, it may be slightly different)
If none of the above works, you could also try to create a new SSIS package from scratch by running the SQL Server Import Wizard (right-click on your database in SQL Server Management Studio and select Tasks/Import Data). Follow the wizard screens, and near the end make sure you check the box to save the SSIS package, choosing a file location to save it to. Typically, the resulting SSIS package will be a functional package (and then you can also make whatever further modifications you like to it).
Does MySQL give you the error, or are you using PHP (or another language) to transfer the data, and does that time out? In the case of the latter, in PHP you can set the script timeout to infinite using this:
set_time_limit(0);
Either way, based on the information given, I'm not sure what type of database it is, but typically I would set up a cron script to transfer the data bit by bit in order to keep the load at an acceptable level. Please give more information...

Deadlock on logging variable value changes using a SQL task

Morning
I've been reading "SQL Server 2008 Integration Services Problem - Design - Solution". It outlines a way of logging variable changes which I'm trying to replicate in SQL 2005.
Create variables, e.g. PackageId, RecordsAffected. - Set RaiseChangedEvent to true.
Create a string variable, e.g. strVariableValue. - Set RaiseChangedEvent to false.
On the package event handler OnVariableValueChanged, add a script task "SCR Convert value to string".
Add ReadOnlyVariables: System::VariableValue
Add ReadWriteVariables: User::strVariableValue
In the script, set a local variable to System::VariableValue.Value.ToString (see the sketch after this list).
Set the variable User::strVariableValue to the local variable.
Add an "Execute SQL Task" component, "SQL Log Variable Value Changed", calling a SP with no resultsets.
Set the parameter mapping to User::PackageId, System::VariableName, User::strVariableValue.
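For reference, the body of the "SCR Convert value to string" script is essentially one assignment - a minimal sketch, assuming SSIS 2005's VB.NET script task and the variable names listed above:

    Public Sub Main()
        ' Copy the changed variable's value into the string variable
        ' so the logging SP can receive it as a parameter.
        Dts.Variables("User::strVariableValue").Value = _
            Dts.Variables("System::VariableValue").Value.ToString()
        Dts.TaskResult = Dts.Results.Success
    End Sub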
When this is run, I get a deadlock on User::PackageID
Error: 0xC001405B at SQL Log Variable Value Changed: A deadlock was detected while trying to lock variable "User::_PackageID" for read access. A lock could not be acquired after 16 attempts and timed out.
The script step succeeds but the Execute SQL task fails. I'm using Visual Studio 2005 Version 8.0.50727.42, Microsoft SQL Server Integration Services Designer Version 9.00.4035.00 and BIDSHelper Version 1.4.3.0.
Any ideas?
Eureka!
I had the same problem, and it led me down a few dead-end posts before I discovered the root cause.
I had the framework working just fine and wanted to force some info to be logged.
So I changed the value of the framework variable "strVariableValue", and this caused the deadlock with the change event task.
I fixed it by creating my own variable "strLogMe" and putting whatever I wanted to log in it.
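So the forced logging becomes a write to that new variable instead (a one-line sketch; the variable name is as above and the message text is illustrative):

    Dts.Variables("User::strLogMe").Value = "About to load the staging table"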
Moral: don't touch the framework variables
Did you use the code sample from the book? All the files are available on the Wiley website for free. The code sample includes an SSIS package, SQL scripts, and VB code for the script. If this doesn't work for you, then let me know, since one of my team members found a way to log variable changes that is different from this methodology.
I was getting this error ("a deadlock was detected", etc.) suddenly, which seemed to coincide with I.T. having applied a Microsoft Windows patch on the server. There were packages using script tasks, with read-only and/or read-write variables declared in the SSIS UI. It seemed to be an environmental issue, because the packages had worked for months and I hadn't changed any code. Still, various blog posts from years gone by described companies applying server patches and then having their SSIS packages break, and those blogs seemed to say: change the way you're locking the variables; don't reference them in the UI, but lock them explicitly in code. So I tried the same thing. It didn't fix it.
It turns out some individual had removed the user under whose identity the packages run from the AD group; those permissions were required because the package was trying to copy a file from a directory which required read permissions on the directory. These packages are typically called by a SQL agent job using a proxy identity. When the package was executed manually from SSMS, it worked. But when it was run by calling the SQL agent job, it failed.
The bottom line is, it was just coincidence that the packages started failing around the time of the Windows update. But the other (main) point is, if your package is trying to access a file on the network, and the identity (or proxy identity) under which that package runs does not have permissions to the source or target directory, then your package could fail and the problem could manifest itself in this cryptic way, where it looks like a variable deadlock issue, but it's actually a file share permissions issue. I only wasted a day on this, but... maybe this will be useful to somebody in the future.

How to fix SSIS : "Value, does not fall within expected range"?

When I open up the solution that contains SSIS packages created by a colleague, I get this awkward error that tells me nothing about what I'm supposed to do to fix it.
He left instructions to take all the "variables" out of the connection string in the dtsx file manually before opening the solution. I have done that; now when I try to view the package in the designer, I just get an image of a red X and this message.
EDIT: You cannot see any design elements - no tabs across the top to switch to errors or data flows. Just a gray center area on the screen with a red X and the message; it's like Visual Studio dies in the process of reading the dtsx file.
The question is rather unspecific, so it's of course difficult to get on the right track here. All of the given answers focus on different issues. I would say that PeterX had the best guess. The reason for the error could be as simple as a modified data source.
I came across the bug "error output has no corresponding output" quite often when adding a new column to a table that needs to be processed by an existing SSIS package. This bug came along with an error message saying that a "Value does not fall within the expected range".
A newly added column needed to be processed by an existing SSIS package. The expected behavior is that SSIS will recognize that there is a new column and offer it for selection on the Columns page of the OLEDB Source task. However, when opening the OLEDB Source task for the first time after having modified the table, I got the following error message twice: "Value does not fall within the expected range." The error message showed up when opening the editor and again when opening the Columns page of the editor. Within the Advanced Editor of the OLEDB Source task, the new column showed up in the OLEDB Source Output Columns tree, but not in the OLEDB Source Error Output Columns tree. This is the actual underlying problem behind the error message. Unfortunately, there seems to be no way to add the missing column manually.
To solve the problem, remove and re-add the newly added column on the Columns page of the normal editor, as mentioned by Jeff.
It is worth mentioning that the data source of the OLEDB Source task was a modified MDS view. Microsoft Dynamics CRM - as mentioned in the related thread - uses views, too. That leads me to the conclusion that using views as a data source may produce either of the above-mentioned errors when modifying data types or adding/removing columns.
Related Thread: Error" ...The OLE DB Source.Outputs[OLE DB Source Output].Columns[XXXXXXXX] on the non-error output has no corresponding output
The described workaround refers to Visual Studio 2008 Version 9.0.30729.4462 QFE with Microsoft .NET Framework 3.5 SP1. The database is SQL Server 2008 R2 (SP2).
I had to delete and recreate the OLE DB data source in my Data Flow - this is where I got the error. I also noted I had to "re-select" the "OLE DB connection manager" in the drop-down list to force it to recognise the new connection.
This was probably a combination of getting the solution from TFS (where I noticed the data sources didn't come across properly, and it complained about a missing connection GUID) and/or copying and pasting the elements from another package.
(For BIDS 2008).
I had this issue with my OLE DB Source component (using a SQL command) after adding new columns to the database, and it wouldn't let me select columns or do anything else to add the new columns.
I'm working with an Oracle database, and the only way I could get it to update was to change the SQL query to select 1 from dual, and preview it. Then revert back to my old query.
You get a similar message if someone uses EncryptAllWithUserKey as the ProtectionLevel. However, I believe the message is slightly different (even though you get a grey design surface with a red X).
Have you tried viewing the file in Notepad? Is it just a series of GUIDs, or is there anything in it that is humanly readable? If it doesn't have any readable code, then it was probably encrypted with the user key.
If the employee deployed the packages to a server and used SQL Server as the deployment destination (not File System or SSIS Package Store), then you can download the packages to your machine. Just connect to the SQL Server Integration Services engine, expand Stored Packages, expand MSDB, expand the relevant folder, right-click on the package, and click Export Package. Save the file on your local machine and open it. The package will probably lose annotations and pretty formatting, but otherwise it should be identical to what the employee deployed.
I just struck the same issue. After flailing about for a bit, I found the solution was to edit the Solution Configuration.
The Solution Configuration appeared to have a matching Project configuration.
However, clicking the drop-down arrow for that project (SSIS-Advance in this example) revealed that there was no project configuration for that project called Production - Sub Reports. I'm not sure how that came about - this solution has a 7-year history and many developers.
Anyway once I created a New Project configuration (using that same drop-down menu), it is all happy now.
If it has Oracle data sources, you may need to install the Microsoft Connectors v4.0 for Oracle by Attunity:
https://www.microsoft.com/en-us/download/details.aspx?id=52950
I also had to use VS 2015 - the version originally used to create the project and package.
I had this exact problem and installing these connectors and using VS 2015 fixed the issue.
I also had this occur when I tried to call a stored procedure with OUTPUT parameters with OLE DB.
I found this: http://sqlsolutions.blogspot.com/2013/04/ssis-value-does-not-fall-within.html, which resolved my issue. The relevant action was to rename the SSIS parameter mappings to '0', '1', etc.
So for example, when calling dbo.StoredProc @variable0 = ?, @variable1 = ? OUTPUT, @variable2 = ?;, in the parameter mapping dialog you would name the parameters '0', '1', '2' to correspond to those. Ah, SSIS <3
I get this when I do not follow the convention for parameter naming, e.g. not naming parameters 0, 1, 2, ... in the right order for OLE DB connections.
The details are documented here.
In your connection manager, convert your connections to package level instead of project level
Deleting the connection manager and re-creating it, then setting up the SSIS package again, solved the problem.
I got this issue after using Add Existing Connection Manager in an SSIS project. I was importing a project connection manager (.conmgr) from a different project into my project. My solution to fix the issue was:
Deleting the imported .conmgr
Recreating it from scratch