In a database I have a main table with parts and their quantities in stock in the warehouse, and many small tables, one per device, each listing the parts and quantities needed to assemble that device (the device name is the table name).
I wish to build a query that takes a device name and gives me the part codes and the quantities stocked in the warehouse (and maybe a column with the difference between the requested quantity and the available one).
Do you think it's possible?
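For illustration, a minimal sketch of such a query in Access SQL, assuming hypothetical tables Warehouse(PartCode, QtyInStock) and one table per device, e.g. DeviceX(PartCode, QtyNeeded):

-- For each part the device needs, show stock on hand and the shortfall/surplus
SELECT d.PartCode,
       d.QtyNeeded,
       w.QtyInStock,
       w.QtyInStock - d.QtyNeeded AS Difference
FROM DeviceX AS d
LEFT JOIN Warehouse AS w
       ON w.PartCode = d.PartCode;

The LEFT JOIN keeps device parts that have no warehouse row at all, which would otherwise silently disappear from the result.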
I've been tasked with creating a database to help with my company's annual physical inventory count. We have an ERP system that contains all our part numbers, and all of our parts come in boxes with a carton label containing "part number" and "quantity", both barcoded. We are trying to switch to a system in which we scan all the boxes into my Access database, versus our previous system of physically counting and writing tags.

So far everything works great. The only issue is that some barcodes get damaged or just won't scan properly for some reason, so the scanners read the barcode as a part number that doesn't exist. We are able to export a list of all part numbers from our ERP system into an Excel file, which I have linked to my database and use to see which entered part numbers don't exist in the system.

My question is: is there a way to prevent people from scanning part numbers that don't match any part number in the Excel file exported from our ERP system? I'm using MS Access 2003.
Using a combo box with the LimitToList property set to Yes is a great solution. Just be aware that with this solution it is still possible to PASTE values that are NOT in the list. If you want to guarantee that the field can never contain an invalid value, the best approach is to create a Relationship (with referential integrity enforced) between the table containing all valid part numbers (the master side) and the table receiving the scanned value (the related side).
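As a rough sketch, the same constraint can be expressed in Access DDL, assuming hypothetical tables PartNumbers (the valid list) and Scans (the scanned entries), each with a PartNumber column:

-- Reject any row in Scans whose PartNumber has no match in PartNumbers
ALTER TABLE Scans
ADD CONSTRAINT fkScansPartNumber
FOREIGN KEY (PartNumber) REFERENCES PartNumbers (PartNumber);

With the constraint in place, the Jet engine rejects the bad row regardless of whether the value was typed, pasted, or scanned.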
My MS Access query
SELECT MSysObjects.Type, MSysObjects.Name
FROM MSysObjects
WHERE (((MSysObjects.Type)=1) AND ((MSysObjects.Name) Like "{*"));
is showing >3,000 local tables whose names begin with "{". I didn't knowingly create these.
E.g.
Type Name
1 {00191663-6977-4C13-A56F-0E0A36697A81}
1 {00191663-6977-4C13-A56F-0E0A36697A81}_shadow
1 {001E812C-A324-40AF-B3F8-9703969260B5}
1 {001E812C-A324-40AF-B3F8-9703969260B5}_shadow
My database is surprisingly large. I tried a compact and repair, but these tables remain.
I am linking to a number of SharePoint lists. Are these tables needed by SharePoint? Or, can I safely delete these?
I'm using Microsoft Office 365 ProPlus Access, version 16.0.11929.20978.
These are SharePoint list IDs.
{00191663-6977-4C13-A56F-0E0A36697A81} is called a GUID. SharePoint uses GUIDs to identify lists (while a name is shown, what's actually used is a GUID).
I would not recommend deleting these. If size becomes a problem, note that there are multiple ways Access can cache SharePoint tables, and some cache a large amount of data. You can toggle this per database, under Options -> Current Database.
I'm trying to develop a new reporting module for a resource management tool (PHP + MySQL).
I am trying to extract data in the following format from MySQL:
I have a table that consists of the date and location of multiple people (i.e. Office, Home, or Client).
Sample Data as in DB.
Here date_plotted means the date for which the user is plotted (engaged), and plotting_date represents the date when this particular entry was made in the system. So the user was plotted to be in the office on 30th Oct, and the same entry was made on 30th Oct.
Data as in resource table
The resource table represents the user table.
Any suggestions on how to do the same in mysql?
These are the primary tables which need to be used.
The above table is done in Excel for now to represent the desired outcome.
I'm new to SQL, so I haven't tried anything yet.
There is a tool for Windows that might simplify this operation. It's made by MySQL and called MySQL for Excel. In theory it should allow you to structure and make changes to MySQL databases as well as perform queries that result in spreadsheets.
Without knowing more about your data (for example, an actual CSV file to work with) and the parameters of the actual pull (whether the dates are always fixed or this is a dynamic pull based on a range), this question could result in 100 different implementations that visually return similar results but have massively different overhead requirements in implementation.
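For what it's worth, the cross-tab layout the question describes is usually built in MySQL with conditional aggregation. A minimal sketch, assuming hypothetical tables resource(id, name) and plotting(resource_id, date_plotted, location):

-- One row per user, one column per date, cell = location on that date
SELECT r.name,
       MAX(CASE WHEN p.date_plotted = '2018-10-30' THEN p.location END) AS oct_30,
       MAX(CASE WHEN p.date_plotted = '2018-10-31' THEN p.location END) AS oct_31
FROM resource r
LEFT JOIN plotting p ON p.resource_id = r.id
GROUP BY r.name;

Each CASE expression picks out one date column; a dynamic date range would mean generating one such expression per day in PHP before running the statement.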
I have an LDAP CSV file that is imported nightly and dumped into my MySQL database. It has about 70,000 employee records.
Included in that is empl#, email, group, supervisor, etc.
I have reports that are being generated from various web sites. We are dumping these reports in the database once a month. These reports usually have empl#, email, hits, logins, whatever...
My goal is to combine the report data and add in things like group, supervisor, etc. based on empl#... Speed is a big concern because of the size of the database and number of users.
At first I thought of making a simple left join (given that the report data is on the left, and that not everyone in a report is necessarily an employee). However, the problem with that is that it does not take a snapshot in time. If report data from 6 months ago is viewed, I don't want it mixed with current employee data; I want it to stay a snapshot in time.
What is the best way to handle this?
You will need a date column of some kind in both sets of data on which to join. Once you have that, you can simply put a condition in the WHERE clause that establishes the snapshot and limits the selection.
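A minimal sketch of that idea, assuming hypothetical tables report(empl_no, report_date, hits) and employee_snapshot(empl_no, snapshot_date, grp, supervisor), where the nightly LDAP import stamps each row with snapshot_date:

-- Pair each report row with the employee data as it stood on the report date
SELECT r.empl_no,
       r.hits,
       e.grp,
       e.supervisor
FROM report r
LEFT JOIN employee_snapshot e
       ON e.empl_no = r.empl_no
      AND e.snapshot_date = r.report_date
WHERE r.report_date = '2013-06-01';

Because the date is part of the join condition, a six-month-old report keeps the group and supervisor values from six months ago rather than picking up current ones.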
I'm going to do my best to try to explain this. I currently have a data flow task with an OLE DB Source transferring data from a table in one database to a table in another database. It works fine, but the issue I'm having is that I keep adding duplicate data to the destination table.
So a CustomerID of '13029' with an amount of '$56.82' on Date '11/30/2012' is seen in that table multiple times. How do I make it so I can only have unique data transferring over to that destination table?
In the data flow task where you transfer the data, you can insert a Lookup transformation. In the Lookup, you can specify a data source (a table or a query, whatever serves you best). When you have chosen the data source, you can go to the Columns view and create a mapping, connecting the CustomerID, Date, and Amount columns of both tables.
In the General view, you can configure what happens with matched/non-matched rows. Simply take the no-match output and direct it to the DB destination.
You will need to identify what makes that data unique in the table. If it's a customer table, then it's probably the CustomerId of 13029. However, if it's a customer order table, then maybe it's the combination of CustomerId and OrderDate (and maybe not; I have placed two unique orders on the same date). You will know the answer to that based on your table's design.
Armed with that knowledge, you will want to write a query to pull back the keys from the target table:

SELECT CO.CustomerId, CO.OrderId
FROM dbo.CustomerOrder CO

If you know the process only transfers data from the current year, add a filter to the above query to restrict the number of rows returned. The reason for this is memory conservation: you want SSIS to run fast, so don't bring back extraneous columns or rows it will never need.
Inside your data flow, add a Lookup transformation with that query. You don't specify whether your SSIS version is 2005, 2008, or 2012, and they have different behaviours associated with the Lookup transformation. Generally speaking, what you are looking to do is identify the unmatched rows. By definition, unmatched means they don't exist in the target database, so those are the rows that are new. 2005 assumes every row is going to match, or it errors: you will need to click the Configure Error Output... button and select "Redirect Rows". 2008+ has an option under "Specify how to handle rows with no matching entries", and there you'll want "Redirect rows to no match output."
Now take the No match output branch (2008+) or the error output branch (2005) and plumb that into your destination.
What this approach doesn't cover is detecting and handling when the source system reports $56.82 and the target system has $22.38 (updates). If you need to handle that, then you need to look at some change detection system. Look at Andy Leonard's Stairway to Integration Services series of articles to learn about options for detecting and handling changes.
Have you considered using the T-SQL MERGE statement? http://technet.microsoft.com/en-us/library/bb510625.aspx
It will compare both tables on defined fields and take an action depending on whether a row matched or not.
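A minimal sketch of that approach, assuming hypothetical staging and target tables keyed on CustomerID and Date:

-- Insert only rows that are not already present in the target
MERGE dbo.CustomerAmounts AS t
USING dbo.StagingAmounts AS s
    ON t.CustomerID = s.CustomerID
   AND t.[Date] = s.[Date]
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, [Date], Amount)
    VALUES (s.CustomerID, s.[Date], s.Amount);

Run after the data flow lands everything in a staging table, this makes the load idempotent: re-running it never creates duplicates. Note that MERGE requires SQL Server 2008 or later.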