MS Access - Division by zero + Nz()

I have two cross tab queries (see below for structure). Pretty simple. The first takes the total of each building type that my company owns in each city, and the second takes the total of ALL (not just company owned) buildings by type in the entire city.
All I want to do is calculate a percentage, but I am having a lot of trouble. I think I am pretty close, but for some reason my Nz() function isn't working properly: I keep getting the "Division by zero" error. Here's my percent formula:
DCount("[ID]", "[Company_owned]") / DCount("[ID]", "[City_Totals]", "[Year_built]=2000" & Nz(Year_built, "null"))
Here is the layout of my cross tab queries.
1) Company-owned buildings, by city and building type:
╔═══════════════╦═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ City \ Type   ║ 1 ║ 2 ║ 3 ║ 4 ║ 5 ║ 6 ║ 7 ║
╠═══════════════╬═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ Atlanta       ║ 0 ║ 7 ║ 0 ║ 2 ║ 3 ║ 4 ║ 9 ║
║ New York      ║ 0 ║ 0 ║ 2 ║ 5 ║ 7 ║ 8 ║ 2 ║
║ San Francisco ║ 1 ║ 1 ║ 2 ║ 3 ║ 4 ║ 5 ║ 6 ║
╚═══════════════╩═══╩═══╩═══╩═══╩═══╩═══╩═══╝
2) ALL buildings in each city, by building type:
╔═══════════════╦═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ City \ Type   ║ 1 ║ 2 ║ 3 ║ 4 ║ 5 ║ 6 ║ 7 ║
╠═══════════════╬═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ Atlanta       ║ 8 ║ 9 ║ 3 ║ 2 ║ 3 ║ 7 ║ 9 ║
║ New York      ║ 0 ║ 0 ║ 2 ║ 7 ║ 7 ║ 9 ║ 2 ║
║ San Francisco ║ 3 ║ 1 ║ 9 ║ 3 ║ 5 ║ 5 ║ 8 ║
╚═══════════════╩═══╩═══╩═══╩═══╩═══╩═══╩═══╝
Can someone please tell me why I am getting the "Division by zero" error, and whether this is a sound strategy for calculating the percentages from data in two cross tab queries? (I have also considered doing all of the percentage calculations in the report, but that seems a little more tedious.)

I'm guessing a bit here, but I think what you are looking for is something more like this:
DCount("[ID]","[Company_owned]") / _
DCount("[ID]","[City_Totals]", "[Year_built]" & _
IIf(IsNull(Year_built), " Is Null", "=" & Year_built))
Note: if you are entering this in a query rather than in VBA, leave off the line-continuation characters (_) and run the lines together.
I think the reason you are having trouble is that the second criteria string you wrote was evaluating to something like [Year_built]=20002008, or [Year_built]=2000null.
Even if leaving the 2000 in was just a typo in your question, [Year_built]=null would still not do what you seem to expect: comparing against Null with = never matches anything, so you need the Is Null form in that case.
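To see the difference concretely, you can print both criteria strings in the VBA Immediate window (a quick sketch; the 2008 value is just an illustration, not taken from your data):

' Original expression, when Year_built is Null:
? "[Year_built]=2000" & Nz(Null, "null")
' -> [Year_built]=2000null   (not a valid criteria string)

' Corrected expression, with a value and with Null:
? "[Year_built]" & IIf(IsNull(2008), " Is Null", "=" & 2008)
' -> [Year_built]=2008
? "[Year_built]" & IIf(IsNull(Null), " Is Null", "=" & Null)
' -> [Year_built] Is Null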

Related

Convert CSV to nested object and back to CSV, based on previous and new data, in PowerShell

First off, that title may be a little confusing, and I'm not sure how to word it. Please edit it if you need to.
I am tasked with getting logs from a printer and creating a cumulative report. The printer cannot supply date ranges, so I need to factor that in. My approach is to take the CSV and convert it to a nested object (XML or JSON), so each user can have its own printer properties. Then I need to turn it back into a CSV broken out by month. If CSV doesn't work, maybe I can use XML, which I could leverage in PowerShell to create a custom report in HTML.
For example, this is what one month would look like
PrinterLog-Sep10.csv
User Name   Dept. ID  Color Total  Black & White Total  Total Prints
----------  --------  -----------  -------------------  ------------
Mary Smith  1002      3            3                    6
Kevin Hart  1006      3            2                    5
Jeff Davis  1004      4            0                    4
John Doe    1001      0            0                    0
Joe Dirt    1003      0            0                    0
Jane Jones  1005      0            0                    0
I also need to factor in that additional users may be added at any time, so the following month could look like this:
PrinterLog-Oct19.csv
User Name   Dept. ID  Color Total  Black & White Total  Total Prints
----------  --------  -----------  -------------------  ------------
John Doe    1001      2            5                    8
Joe Dirt    1003      7            15                   8
Jeff Davis  1004      6            4                    7
Mary Smith  1002      6            7                    5
Will Smart  1007      32           12                   43
Jane Jones  1005      3            14                   2
Kevin Hart  1006      6            7                    10
My approach has been to use this foreach loop, but I cannot think of how to check for new users while keeping the existing data.
$final = @{}   # initialize as an empty hashtable so += accumulates entries
foreach ($user in $canonCSV) {
    $final += @{
        $user.'User Name' = @{
            "B&W"   = $user.'Black & White Total'
            "Color" = $user.'Color Total'
            "Total" = $user.'Total Prints'
            "Dept"  = $user.'Dept. ID'
        }
    }
}
I was thinking maybe exporting the nested objects to XML or JSON, but when I import them back, I'm not sure how to flatten them to a CSV. I tried using Compare-Object, but that does not handle additional users correctly. I'm literally losing sleep over this, as I cannot think of a way to get this right. I'm sure it is something small or trivial, but everything is slipping my mind. Any help is greatly appreciated.
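One way to sketch the cumulative merge described above, assuming every monthly file shares the column layout shown (the file pattern, variable names, and output file name are all illustrative):

# Accumulate all monthly logs into one lookup keyed by user name.
$final = @{}

foreach ($file in Get-ChildItem 'PrinterLog-*.csv') {
    $month = $file.BaseName -replace '^PrinterLog-', ''    # e.g. "Sep10"
    foreach ($user in Import-Csv $file.FullName) {
        $name = $user.'User Name'
        if (-not $final.ContainsKey($name)) {
            # First sighting of this user (covers users added in later months)
            $final[$name] = @{ Dept = $user.'Dept. ID'; Months = @{} }
        }
        $final[$name].Months[$month] = @{
            'B&W'  = $user.'Black & White Total'
            Color  = $user.'Color Total'
            Total  = $user.'Total Prints'
        }
    }
}

# Flatten back to one row per user per month and write a cumulative CSV.
$rows = foreach ($name in $final.Keys) {
    foreach ($month in $final[$name].Months.Keys) {
        [pscustomobject]@{
            'User Name' = $name
            'Dept. ID'  = $final[$name].Dept
            Month       = $month
            'B&W'       = $final[$name].Months[$month].'B&W'
            Color       = $final[$name].Months[$month].Color
            Total       = $final[$name].Months[$month].Total
        }
    }
}
$rows | Export-Csv 'PrinterLog-Cumulative.csv' -NoTypeInformation

The ContainsKey check is what handles users who first appear in a later month: they simply get a new entry, while existing users accumulate another month of data.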

BIRT report - count the number of rows matching a criterion

╔════╦═════╦══════╗
║ id ║ TV# ║ Time ║
╠════╬═════╬══════╣
║ 1  ║ TV1 ║ 0    ║
║ 2  ║ TV2 ║ 10   ║
║ 3  ║ TV3 ║ 0    ║
║ 4  ║ TV3 ║ 20   ║
║ 5  ║ TV3 ║ 21   ║
║ ...║ ... ║ ...  ║
╚════╩═════╩══════╝
For each TV#, I want to count the number of rows (ids) where Time > 0.
In this case, I want the result to be:
TV1 - 0; TV2 - 1; TV3 - 2
I'm using BIRT Report and have tried several approaches, but I couldn't get what I want. This is what I'm using at the moment:
Data Cube, Summary fields (measure)
Function: Count
Expression: measure["id"]
Filter: measure["Time"]>0
And then I'm using an Aggregation Builder:
Function: Count or Sum
Expression: measure["id"]
Filter: measure["Time"] > 0
Aggregate on: GroupTV#
When I use Count, it returns only 0s and 1s (it gives "1" for a TV# when there is at least one Time > 0), i.e. TV1 - 0; TV2 - 1; TV3 - 1.
When I use Sum, it returns the number of times each TV# appears in the table (when there is at least one Time > 0 for that channel), i.e. TV1 - no output; TV2 - 1; TV3 - 3.
Can someone help me?
One option is to do the counting in the data set query itself, e.g.:
SELECT tv
     , SUM(CASE WHEN time > 0 THEN 1 ELSE 0 END) x
  FROM my_table
 GROUP BY tv;
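Against the sample table above this returns TV1 - 0, TV2 - 1, TV3 - 2, which matches the expected output.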
Either method should work; however, in your example you're counting the occurrences of a particular id instead of TV#.
Naturally, as the ids are unique, there will only ever be at most one occurrence of each.
You need:
Function: Count
Expression: change from 'id' to 'TV#'
Filter: measure["Time"] > 0
Instead of using filters on the Data Cube, the Cross Tab, or the Chart, just filter on the Data Set.

How to normalize my tables?

I'm in a situation here. I want to normalize my table:
Exam_Papers
ID  Country_Code  Level
 1  UK            1
 2  UK            2
 3  UK            3
 4  UK            4
 5  UK            5
 6  UK            6
 7  UK            7
 8  SA            1
 9  SA            2
10  SA            3
11  SA            4
12  SA            5
13  SA            6
14  SA            7
15  IN            1
16  IN            2
I understand that I could normalize this by putting Levels in a separate table, but then Country_Code would still contain duplicated data, so how far should this table be normalized?
So normalized that Country_Code and Level each have their own table?
Also, how is normalization beneficial in this example? Either way, making two separate tables would mean the FKs are still duplicated (for example, if UK had the ID 1, my table would contain seven 1s).
Thanks in advance
You don't need to make two tables in this case, because, as you mentioned, you would still need one entry per row (just in ID form). Also, I suspect that if you split the table, your performance would degrade, since your SQL queries would need an extra join.
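For comparison, the split discussed in the question would look something like this (a sketch in generic SQL; table and column names are illustrative):

CREATE TABLE Countries (
    Country_ID   INT PRIMARY KEY,     -- e.g. 1 = UK
    Country_Code CHAR(2) NOT NULL     -- 'UK', 'SA', 'IN'
);

CREATE TABLE Exam_Papers (
    ID         INT PRIMARY KEY,
    Country_ID INT NOT NULL REFERENCES Countries(Country_ID),
    Level      INT NOT NULL
);

-- Reading the original layout back now requires a join:
SELECT p.ID, c.Country_Code, p.Level
  FROM Exam_Papers p
  JOIN Countries c ON c.Country_ID = p.Country_ID;

Note that the repeated Country_ID values are exactly the duplicated FKs the question worries about; repeating a foreign key is normal and is not the kind of redundancy normalization removes.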

How to apply a formula for removing data noise in R?

I am working on NGSim traffic data: a text file with 18 columns and 1180598 rows. I want to smooth the position data in the column 'Local Y'. I know there are built-in functions for data smoothing in R, but none of them seem to match the formula I am required to apply. The data in the text file looks something like this:
Index  VehicleID  Total_Frames  Local Y
    1          2             5   35.381
    2          2             5   39.381
    3          2             5   43.381
    4          2             5   47.38
    5          2             5   51.381
    6          4             8  504.828
    7          4             8  508.325
    8          4             8  512.841
    9          4             8  516.338
   10          4             8  520.854
   11          4             8  524.592
   12          4             8  528.682
   13          4             8  532.901
   14          5             7   39.154
   15          5             7   43.153
   16          5             7   47.154
   17          5             7   51.154
   18          5             7   55.153
   19          5             7   59.154
   20          5             7   63.154
The columns above are just an example taken from the original file. Here you can see 3 vehicles, with vehicle IDs 2, 4 and 5, but in fact there are 2169 vehicles with different IDs. The Total_Frames column tells us how many rows each vehicle ID occupies; for example, in the table above vehicle ID 2 appears 5 times, hence the '5' in Total_Frames. The following is the formula I am required to apply to remove data noise (smoothing) from the column 'Local Y':
Smoothed Local Y(i) = ( SUM[k = i-D .. i+D] of LocalY(k) * exp(-|i-k| / delta) )
                    / ( SUM[k = i-D .. i+D] of exp(-|i-k| / delta) )
where
i     = index #
delta = 5
D     = 15
I have tried the built-in functions I know of, but they don't smooth the data as required. My question is: is there any built-in function in R which can do the smoothing according to the given formula, or which could take this formula as an argument? I need to apply the formula to every value in Local Y using the 15 values before and the 15 values after it (i-D to i+D) for the same vehicle ID. Can anyone give me an idea of how to approach the problem? Thanks in advance.
You can place your formula in a function and then use one of R's apply functions to run it over the "Local Y" column of the data frame.
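A minimal sketch of that idea, assuming the data has been read into a data frame called df with columns VehicleID and LocalY (the names are illustrative):

smooth_one <- function(y, D = 15, delta = 5) {
  n <- length(y)
  sapply(seq_len(n), function(i) {
    k <- max(1, i - D):min(n, i + D)   # window, clipped at the series ends
    w <- exp(-abs(i - k) / delta)      # exponential kernel weights
    sum(w * y[k]) / sum(w)             # normalized weighted average
  })
}

# ave() applies the smoother separately to each vehicle, so the window
# never mixes rows from two different vehicle IDs.
df$SmoothedY <- ave(df$LocalY, df$VehicleID, FUN = smooth_one)

Clipping the window at the ends means the first and last 15 rows of each vehicle are smoothed over fewer than 2D + 1 points; if the specification requires exactly 31 points, those edge rows would need separate handling.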

Add together grouped rows into one value

I've got an issue where I've been presented with data in this format from SQL, and have imported it directly into SSRS 2008.
I do have access to the stored procedure for this report, however I don't want to change it as a few other reports rely on it.
Project  HoursSpent  Cost
      1           5    45
      1           8    10
      1           7    25
      1           5    25
      2           1    15
      2           3    10
      2           5    15
      2           6    10
      3           6    10
      3           4     5
      3           4    10
      3           2     5
I've been struggling all morning to understand how and where I should be applying the SUM() function here.
I have already tried to SUM() the rows, but it still outputs the result above.
Should I be adding any extra groups?
Ideally, I need to have the following output:
Project  HoursSpent  Cost
      1          25   105
      2          15    40
      3          16    30
EDIT: Here is my current structure:
"LineName" is a group for each project
You have to add a row group on "Project", since you want to sum up the data per project.
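Once the row group on "Project" exists, the cells on the group row take aggregate expressions instead of plain field references, along these lines (a sketch; the field names are assumed to match the report dataset):

=Fields!Project.Value
=Sum(Fields!HoursSpent.Value)
=Sum(Fields!Cost.Value)

Sum() evaluates over the current group scope, so placed on the Project group row it totals only that project's detail rows; deleting the detail row then leaves one summed line per project, matching the desired output above.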