I have 8 queries, all with the same design, that each make a new table for different criteria, and I would like to append their results into one single table.
Is there any way with VBA code or possibly UNION to do this?
SELECT tbl_SCCMQ.CONTRACT_ACCOUNT_NUMBER, tbl_SCCMQ.BP_Partner, tbl_SCCMQ.CONTRACT_NUMBER, tbl_SCCMQ.BILL_TO_DATE, tbl_SCCMQ.CONTRACT_START_DATE, tbl_SCCMQ.AGEING_DATE, tbl_SCCMQ.DateDiff, tbl_SCCMQ.PAYMENT_TYPE, tbl_SCCMQ.BP_Type, tbl_SCCMQ.[Next Bill Due Date], tbl_SCCMQ.[BAND], tbl_SCCMQ.RAG, tbl_SCCMQ.BILL_STATUS INTO tbl_01_Resi_CCQ_R1_4_Never_Billed_NoSS
FROM tbl_SCCMQ
WHERE (((tbl_SCCMQ.BP_Type)="B2C") AND ((tbl_SCCMQ.RAG) Like "R*") AND ((tbl_SCCMQ.BILL_STATUS)="First") AND ((tbl_SCCMQ.BILL_BLOCK) Is Null) AND ((tbl_SCCMQ.BILL_LOCK) Is Null) AND ((tbl_SCCMQ.INVOICE_LOCK) Is Null));
Here are two of the queries:
qry_01_Resi_CCQ_R1_4_Never_Billed_NoSS
qry_02_SME_CCQ_R1_4_Never_Billed_NoSS
and I would like them all imported into a main table called "Data".
I am quite new to Access and VBA etc.
Your question suggests you already know how to solve the problem.
Note: queries 1 to 8 must have the same number of fields, and the data types must be consistent at each field's ordinal position (as asserted in your question).
SQL syntax to create a new table (Data) from the queries:
SELECT *
INTO Data
FROM (
    SELECT * FROM query1
    UNION ALL
    SELECT * FROM query2
    UNION ALL
    ...
    UNION ALL
    SELECT * FROM query8
) AS queryData
or
SQL syntax to append data to an existing table:
INSERT INTO Data
SELECT *
FROM (
    SELECT * FROM query1
    UNION ALL
    SELECT * FROM query2
    UNION ALL
    ...
    UNION ALL
    SELECT * FROM query8
) AS queryData
VBA syntax to run the query in code:
Dim db As DAO.Database: Set db = CurrentDb
Dim strSQL As String
strSQL = "...."  ' the SQL from above
db.Execute strSQL, dbFailOnError  ' dbFailOnError raises an error if the SQL fails, instead of failing silently
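For example, filling in the target table and the two query names given in the question (the remaining six queries are assumed to follow the same naming pattern), the append version would look something like the following; the resulting string can then be passed to db.Execute as above, with one further UNION ALL branch per remaining query:
INSERT INTO Data
SELECT *
FROM (
    SELECT * FROM qry_01_Resi_CCQ_R1_4_Never_Billed_NoSS
    UNION ALL
    SELECT * FROM qry_02_SME_CCQ_R1_4_Never_Billed_NoSS
) AS queryData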
In Table1, I have the field Pianificato and many columns named Wxx_yyyy.
I need to update Table1.Pianificato as the sum of all the W-columns that satisfy a certain criterion.
The criterion is that xx must be higher than a certain value.
In this example, xx > 2, so Table1.W01_2018 and Table1.W02_2018 will not be considered in the sum.
I don't think this complex request can be satisfied by a query, so I think the only way is VBA.
If you want to avoid VBA and there are no more than 50 Wxx fields, a UNION can rearrange the fields. There should be a unique identifier field; an autonumber will serve. Since I suspect there are 52 Wxx fields and you don't want the W01_2018 and W02_2018 fields, exclude those 2, which leaves exactly 50.
SELECT ID, W03_2018 AS Data, "W03_2018" AS WkYr FROM tablename
UNION SELECT ID, W04_2018, "W04_2018" FROM tablename
. . .
UNION SELECT ID, W52_2018, "W52_2018" FROM tablename;
Then use that query in an aggregate query:
SELECT ID, Sum(Data) AS Pianificato FROM UnionQuery GROUP BY ID;
An issue arises when you want to do the calculation for a different set of weeks and/or a different year: the UNION has to be modified.
A VBA approach may be more desirable, like:
Sub CalcPianificato()
    Dim rs As DAO.Recordset
    Dim lngP As Long, x As Integer
    Set rs = CurrentDb.OpenRecordset("SELECT * FROM Table1;")
    While Not rs.EOF
        lngP = 0
        'sum the Wxx fields by ordinal position (W03_2018 is index 3, W52_2018 is index 52)
        For x = 3 To 52
            lngP = lngP + Nz(rs.Fields(x), 0)   'Nz() treats Null weeks as 0
        Next
        rs.Edit
        rs!Pianificato = lngP
        rs.Update
        rs.MoveNext
    Wend
    rs.Close
    Set rs = Nothing
End Sub
The code assumes the fields are in the same order in the table as in the example: Pianificato is the first field and there are 52 Wxx fields. The Wxx fields are referenced by index, and W03_2018 is in column 4, which is index 3.
I am trying to get the average of the columns in my table and then insert the averages into a second table. I have over 30 columns, so I would rather not have to do them all individually if possible.
command.CommandText = " INSERT INTO FaceAverages(`rightEyeRightUpper`),(`rightEyeLeftUpper`),(`rightEyeRightLower`),(`rightEyeLeftLower`),(`leftEyeRightUpper`)... FROM(SELECT AVG (rightEyeRightUpper),(rightEyeLeftUpper),(rightEyeRightLower),(`rightEyeLeftLower`)... FROM 'FaceDistancesHappy')";
Assuming your table FaceAverages contains the columns
`rightEyeRightUpper`, `rightEyeLeftUpper`, `rightEyeRightLower`,
`rightEyeLeftLower`, `leftEyeRightUpper`...
then you could use an INSERT ... SELECT such as:
command.CommandText = @"INSERT INTO FaceAverages (`rightEyeRightUpper`, `rightEyeLeftUpper`,
    `rightEyeRightLower`, `rightEyeLeftLower`, `leftEyeRightUpper`, ...)
    SELECT AVG(`rightEyeRightUpper`), AVG(`rightEyeLeftUpper`),
        AVG(`rightEyeRightLower`), AVG(`rightEyeLeftLower`), ...
    FROM `FaceDistancesHappy`";
You declare all the columns you want to insert, corresponding to all the columns you select, and use the AVG() function on each column in the SELECT.
What is the best approach to combine multiple MySQL tables in R? For instance, I need to rbind 14 large MySQL tables (each >100k rows by 100 columns). I tried the approach below, which consumed most of my memory and timed out on MySQL. I am wondering if there is an alternative solution? I do not need to fetch the whole tables; I just need to group them by a couple of variables and calculate some metrics.
station_tbl_t <- dbSendQuery(my_db, "select * from tbl_r3_300ft
union all
select * from tbl_r4_350ft
union all
select * from tbl_r5_400ft
union all
select * from tbl_r6_500ft
union all
select * from tbl_r7_600ft
union all
select * from tbl_r8_700ft
union all
select * from tbl_r9_800ft
union all
select * from tbl_r10_900ft
union all
select * from tbl_r11_1000ft
union all
select * from tbl_r12_1200ft
union all
select * from tbl_r13_1400ft
union all
select * from tbl_r14_1600ft
union all
select * from tbl_r15_1800ft
union all
select * from tbl_r16_2000ft
")
Consider iteratively importing the MySQL table data and then row binding with R. Be sure to select only the needed columns to save on overhead:
tbls <- c("tbl_r3_300ft", "tbl_r4_350ft", "tbl_r5_400ft",
"tbl_r6_500ft", "tbl_r7_600ft", "tbl_r8_700ft",
"tbl_r9_800ft", "tbl_r10_900ft", "tbl_r11_1000ft",
"tbl_r12_1200ft", "tbl_r13_1400ft", "tbl_r14_1600ft",
"tbl_r15_1800ft", "tbl_r16_2000ft")
sql <- "SELECT Col1, Col2, Col3 FROM"
dfList <- lapply(paste(sql, tbls), function(s) {
tryCatch({ return(dbGetQuery(my_db, s))
}, error = function(e) return(as.character(e)))
})
# ROW BIND VERSIONS ACROSS PACKAGES
master_df <- base::do.call(rbind, dfList)
master_df <- plyr::rbind.fill(dfList)
master_df <- dplyr::bind_rows(dfList)
master_df <- data.table::rbindlist(dfList)
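Since only grouped metrics are needed, another option is to push the aggregation into MySQL so that each query returns a small summary instead of 100k+ rows. A rough sketch, where Col1/Col2 stand in for the grouping variables and the aggregates are placeholders:
SELECT Col1, Col2, COUNT(*) AS n, AVG(Col3) AS avg_col3
FROM tbl_r3_300ft
GROUP BY Col1, Col2;
The same statement can be run for each name in tbls (or once over the whole UNION ALL), and the much smaller results row-bound in R as above.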
I have the following SQL, which gives me the error that this union table called brokeredTable is not updatable.
UPDATE (SELECT chid,brokered,bid,uid,rate FROM spot_channels UNION SELECT tid,brokered,bid,uid,rate FROM tremor_tags) as brokeredTable SET brokered = 1, rate = 5, bid = 5, uid = 7 WHERE chid = 110399
As you can see, the SQL is pretty simple: instead of running two UPDATE statements on two different tables, I wanted to UNION them into one set and then run the update against that set of data. Apparently I cannot do it this way.
Any suggestions? Again, I just want one SQL statement to accomplish this.
"The SQL UNION operator combines the result of two or more SELECT statements"
A UNION produces a read-only result set, so you cannot apply the UPDATE table SET column = 'value' syntax to it.
You should search before asking:
Performing an UPDATE with Union in SQL
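If it does not strictly have to be one statement, the simplest workaround is two UPDATE statements wrapped in a transaction so they succeed or fail together. A sketch using the columns from your query (MySQL syntax assumed, and assuming chid and tid hold the same identifier):
START TRANSACTION;
UPDATE spot_channels SET brokered = 1, rate = 5, bid = 5, uid = 7 WHERE chid = 110399;
UPDATE tremor_tags SET brokered = 1, rate = 5, bid = 5, uid = 7 WHERE tid = 110399;
COMMIT;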
I have 4 tables from which I select data with the help of joins in a SELECT query. I want a serial number (row number) for each record as it is fetched: the first fetched record should be 1, the next 2, and so on.
In Oracle the equivalent is ROWNUM.
The answer by Brettski is ASP-flavored and would need a lot of editing.
SELECT DCOUNT("YourField","YourTable","YourField <= '" & [counter] & "'")
AS RowNumber,
YourField as counter FROM YourTable;
Above is the basic syntax. You are likely to find that this runs very slowly. My typical solution is a bucket table with an AutoNumber field. That seems kludgy, but it gives me control, and in this case it probably gives speed as well.
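A rough sketch of that bucket-table idea, with hypothetical names: tbl_Bucket has an AutoNumber column (say RowNumber) plus the field to be numbered, and is emptied and refilled in sorted order each time the numbering is needed:
DELETE FROM tbl_Bucket;
INSERT INTO tbl_Bucket (YourField)
SELECT YourField FROM YourTable ORDER BY YourField;
The AutoNumber is assigned in insert order, so RowNumber follows the sort, though it keeps counting up across reloads unless the bucket table is recreated or the database is compacted.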
Using a sorted make-table query in Access, I use the following (it wouldn't work if you simply view the query, as that would increment the number when you don't want it to):
setRowNumber 'resetting increment before running SQL
DoCmd.RunSQL ... , rowNumber([Any Field]) AS ROW, ...
'Increment Number: Used to create temporary sorted Table for export
Private ROWNUM As Long
'dummyField: the function must take a field argument so it is re-evaluated for every row of the query
Public Function rowNumber(ByVal dummyField As Variant, Optional ByVal incBy As Integer = 1) As Long
ROWNUM = ROWNUM + incBy 'increments before value is returned
rowNumber = ROWNUM
End Function
Public Function setRowNumber(Optional ByVal setTo As Long = 0) As Long
ROWNUM = setTo
setRowNumber = ROWNUM
End Function
With the following table
SET NOCOUNT ON
CREATE TABLE people
(
firstName VARCHAR(32),
lastName VARCHAR(32)
)
GO
INSERT people VALUES('Aaron', 'Bertrand')
INSERT people VALUES('Andy', 'Roddick')
INSERT people VALUES('Steve', 'Yzerman')
INSERT people VALUES('Steve', 'Vai')
INSERT people VALUES('Joe', 'Schmoe')
You can use a self join to create the counting column:
SELECT
rank = COUNT(*),
a.firstName,
a.lastName
FROM
people a
INNER JOIN people b
ON
a.lastname > b.lastname
OR
(
a.lastName = b.lastName
AND
a.firstName >= b.firstName
)
GROUP BY
a.firstName,
a.lastName
ORDER BY
rank
The problem with this method is that the count will be off if there are duplicates in your result set.
This article explains pretty well how to add a row counting column to your query.
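For what it's worth, on SQL Server 2005 and later the ROW_NUMBER() window function gives the same numbering without the duplicate problem:
SELECT
    ROW_NUMBER() OVER (ORDER BY lastName, firstName) AS RowNumber,
    firstName,
    lastName
FROM people;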