I have created a pass-through query from a SQL Server database to display data in an Access database. What I want now is for this information to update another table, which has other information on it imported from another pass-through query.
Think of your pass-throughs as read-only queries. You will not be able to do any record manipulation with them - only return data.
You will want a separate query that does your updates. You may even have to write the resulting dataset from the pass-through to a temp table and use that in your update query (see the sketch below).
As a simplified explanation of pass-throughs: imagine I have two linked tables with 10,000 records each and I join them in a query that returns 5 records. Access needs to pull all 20,000 records across the network in order to compare them and give you 5 results. With a pass-through, the comparison happens on the server and only 5 records come across.
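A minimal sketch of that temp-table approach in Access SQL, run as two separate queries (Access executes one statement per query). All object and field names here - qryPassThrough, tblTempStage, tblTarget, KeyField, SomeField - are placeholders I've invented for illustration, not names from your database:

SELECT qryPassThrough.* INTO tblTempStage
FROM qryPassThrough;

UPDATE tblTarget INNER JOIN tblTempStage
    ON tblTarget.KeyField = tblTempStage.KeyField
SET tblTarget.SomeField = tblTempStage.SomeField;

The first statement is a make-table query that stages the pass-through results locally; the second is an ordinary Access update query joined against that staging table.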
Related
I am not a database engineer, but I have a question about what is possible with a MySQL database.
Is it possible to write SQL to get the data from several tables and then use that data to update a new table?
Also, this work should be scheduled daily.
The reason I ask is that I am in this situation:
Our IT department maintains a big database, but the tables do not meet our department's business needs (we only have read permission). Our department has a small database (where we have full permissions), which we can use to create some special tables with custom SQL and update them daily.
So, back to the question: is it possible to set up the SQL and schedule it so that it keeps updating our tables?
Thank you so much!!!
Is it possible to write SQL to get the data from several tables and then use that data to update a new table?
Yes, it is possible. You can use an UPDATE ... JOIN construct: get the data from several tables with a SELECT statement, JOIN against that inline view, and perform the update on your other table.
Example:
UPDATE Your_Table a
JOIN (
    -- SELECT query to get data from multiple other tables
) xxx ON a.some_column = xxx.some_matching_column
SET a.column_c = xxx.column_c;
Also, this work should be scheduled daily
Sure, use the MySQL Event Scheduler.
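For example, a minimal sketch of a daily event, assuming the UPDATE ... JOIN above has been wrapped in a stored procedure called refresh_our_tables (a hypothetical name):

-- Requires the event scheduler to be enabled and the EVENT privilege on the schema.
SET GLOBAL event_scheduler = ON;

CREATE EVENT IF NOT EXISTS daily_table_refresh
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_TIMESTAMP + INTERVAL 1 DAY
DO
    CALL refresh_our_tables();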
I'm trying to make a split Access db with the back end as linked SharePoint lists in Office 365.
When I try to add data using one list that is my 'Locality' reference, I get the following error:
You cannot reference rows created when you are disconnected from the server because this violates the lookup settings defined for this table or list. Please reconnect all tables with the server and try again.
There seems to be a 5000 row limit on lists in O365!
If I delete most of the list so it is under 5000 rows, it works fine.
http://www.csgpro.com/post/110085
I'm not trying to view it - just use it as a reference.
Images here;
Screenshots in Dropbox
You cannot execute an update (or similar operation) on an Office 365 linked SharePoint table unless an index can be used to reduce the affected rows to below 5000 records. This means that if the criteria used are NOT indexed, a full table scan occurs and you will not be able to update even 50 records.
Worse, during the uploading process there is a bug/issue: any index you set will NOT be created if the initial table upload was greater than 5000 records. So tables can grow beyond 5000 records, and you can/should be able to execute a delete on, say, 100 records. However, if an index cannot be used to grab those 100 records, you get errors.
So you are correct: the issue is the 5000-record limit. Try using the PK as a criterion (a range) and you will likely find the update works, since the PK is indexed even for uploaded tables with more than 5000 records.
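As a hedged illustration only (the list, field, and value names below are invented, and the linked list is assumed to keep its default indexed ID primary key), restricting the statement to an indexed PK range keeps it under the threshold:

UPDATE Locality
SET Locality.Region = "North"
WHERE Locality.ID BETWEEN 1 AND 4999;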
I have a database that is used to create a data file for a customer. The database has two linked tables that point to tables in another database (both source tables live in that same database). Their structure is identical.
I have a union query set up to combine both linked tables.
I am using a Macro to export that query, but after running for a short while I get the error "The query cannot be completed. Either the size of the query set....."
Does Access have a limitation on union query size? Combined, the tables contain a lot of data, but I'm confused, as both of the tables I'm combining are in the same database.
It transpires that in my case I should be using UNION ALL instead of UNION, as there will never be duplicates in the two tables I'm combining.
It appears that the de-duplication of the tables is what was taking up the memory.
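For illustration (the table and field names here are made up), the only change is the ALL keyword: UNION has to de-duplicate the combined rows, which Access does in memory, while UNION ALL simply appends them.

SELECT CustomerID, OrderDate, Amount FROM tblLinkedA
UNION ALL
SELECT CustomerID, OrderDate, Amount FROM tblLinkedB;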
I'm getting data from an MSSQL DB ("A") and inserting it into a MySQL DB ("B") using the date created in the MSSQL DB. I'm doing it with simple logic, but there has got to be a faster and more efficient way of doing this. Below is the sequence of steps involved:
1. Create one connection for the MSSQL DB and one connection for the MySQL DB.
2. Grab all of the data from A that meets the date-range criterion provided.
3. Check which of the data obtained are not present in B.
4. Insert these new data into B.
As you can imagine, step 3 is basically a loop, which can easily max out the time limit on the server, and I feel like there must be a way of doing this much faster, ideally at the time the first query is made. Can anyone point me in the right direction? Can you make "one" connection to both of the DBs and do something like the below?
SELECT * FROM A.some_table_in_A.some_column WHERE
"it doesn't exist in" B.some_table_in_B.some_column
A linked server might suit this
A linked server allows for access to distributed, heterogeneous queries against OLE DB data sources. After a linked server is created, distributed queries can be run against this server, and queries can join tables from more than one data source. If the linked server is defined as an instance of SQL Server, remote stored procedures can be executed.
Check out this HOWTO as well
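As a rough sketch only: one way to set this up is to register the MySQL database as a linked server on the SQL Server side (here via the MSDASQL provider and an ODBC DSN) and then filter with NOT EXISTS against the remote table. The DSN name, linked server name, and all table/column names below are assumptions for illustration, not objects from the original question.

-- Register the MySQL database as a linked server (MySqlDsn is an assumed ODBC DSN).
EXEC master.dbo.sp_addlinkedserver
     @server = N'MYSQL_B',
     @srvproduct = N'MySQL',
     @provider = N'MSDASQL',
     @datasrc = N'MySqlDsn';

-- Push rows in the chosen date range that are not already present in B.
DECLARE @start datetime = '2015-06-01', @end datetime = '2015-06-30';

INSERT INTO OPENQUERY(MYSQL_B, 'SELECT id, created_at, payload FROM some_table_in_B')
SELECT a.id, a.created_at, a.payload
FROM dbo.some_table_in_A AS a
WHERE a.created_at BETWEEN @start AND @end
  AND NOT EXISTS (
        SELECT 1
        FROM OPENQUERY(MYSQL_B, 'SELECT id FROM some_table_in_B') AS b
        WHERE b.id = a.id);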
If I understand your question right, you're just trying to move data from the MSSQL DB into the MySQL DB. I'm also assuming there is some sort of filter criterion you're using to drive the migration. If that's correct, you might try using a stored procedure in MSSQL that can query the MySQL database with a distributed query. You can then use that stored procedure to do the loops or checks on the database side, and the front-end server will only need to make one connection.
If the MySQL table has a primary key defined, you can at least skip step 3 ("Check to see which of the data obtained are not present in B"). Use INSERT IGNORE INTO ... and it will attempt to insert all the records, silently skipping the ones where a record with that primary key already exists.
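A quick sketch of that on the MySQL side (the table and column names are made up, and id is assumed to be the primary key):

-- Rows whose primary key already exists in table_b are skipped silently
-- instead of raising a duplicate-key error.
INSERT IGNORE INTO table_b (id, created_at, payload)
VALUES
    (42, '2015-06-01 10:30:00', 'first row'),
    (43, '2015-06-01 11:00:00', 'second row');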
So here is my situation: I have a vendor-supplied DB we cannot modify and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported.
What I originally started with was selecting the list of ids that have already been imported from the custom db, and then querying the vendor db using that list of ids in a .Where(x => !idList.Contains(x.Id)) clause on the remote query.
This worked up until we passed 2100 records imported into the custom db, as 2100 is the limit on the number of parameters that can be passed to SQL Server. After finding out that this was the actual problem, and not the 'invalid buffer'/'severe error' ADO.NET reported, my solution was to exclude the first 2000 ids in the remote query and then filter out the remaining records in the local query.
Having to pull back a large number of irrelevant records, just to exclude them, so I can get the correct 250 records seems very inelegant. Is there a better way to do this, short of doing a cross db stored procedure?
Thanks in advance.
This might not be the best answer, depending on how many records you're dealing with, but you could force the SQL to execute and just deal with the results as in-memory objects. Calling the ToList() method will execute the SQL and convert the results to an IEnumerable.
What I might suggest is to start by querying the vendor database first, ordering the results by some kind of criterion (perhaps a date field, oldest to most recent).
You could then do a Skip().Take() to "skim" the results, take each batch, and insert the rows into the custom db where the ID doesn't already exist. That way you avoid the problem you have now.
If you have db-create access to the SQL Server that the vendor's db is running on (or if your custom db is on the same server), you could create a "has been imported" table in a different database on that same server, and then write a stored proc that does a cross-database join of that table against the vendor db, e.g.:
select top 250 v.*
from vendordb.dbo.to_be_imported v
where not exists
    (select 1 from customdb.dbo.has_been_imported i
     where i.idWasImported = v.idToBeImported)
order by v.whatever;
You might even be able to do this in Linq 2 SQL -- I've never tried adding objects from different databases into a single DataContext...