How to apply a formula for removing data noise in R?

I am working on NGSIM traffic data, with 18 columns and 1180598 rows in a text file. I want to smooth the position data in the column 'Local Y'. I know there are built-in functions for data smoothing in R, but none of them seem to match the formula I am required to apply. The data in the text file looks something like this:
Index VehicleID Total_Frames Local Y
1 2 5 35.381
2 2 5 39.381
3 2 5 43.381
4 2 5 47.38
5 2 5 51.381
6 4 8 504.828
7 4 8 508.325
8 4 8 512.841
9 4 8 516.338
10 4 8 520.854
11 4 8 524.592
12 4 8 528.682
13 4 8 532.901
14 5 7 39.154
15 5 7 43.153
16 5 7 47.154
17 5 7 51.154
18 5 7 55.153
19 5 7 59.154
20 5 7 63.154
The columns above are just an example taken from the original file. Here you can see 3 vehicles, with vehicle IDs 2, 4 and 5, but in fact there are 2169 vehicles with different IDs. The Total_Frames column tells us how many rows each vehicle ID occupies; for example, in the table above, vehicle ID 2 is repeated 5 times, hence the '5' in the Total_Frames column. The following is the formula I am required to apply to remove data noise (smoothing) from the column 'Local Y':
Smoothed Position(i) = [ sum from k = i-D to i+D of Local Y(k) * exp(-|i-k| / delta) ] / [ sum from k = i-D to i+D of exp(-|i-k| / delta) ]
where,
i = index #
delta = 5
D = 15
I have tried the built-in functions I know of, but they don't smooth the data as required. My question is: is there any built-in function in R that can do the smoothing according to the given formula, or that could take this formula as an argument? I need to apply the formula to every value in Local Y, using the 15 values before and the 15 values after it (i-D to i+D) for the same vehicle ID. Can anyone give me an idea of how to approach the problem? Thanks in advance.

You can place your formula in a function and then use one of R's apply-family functions to run it over the "Local Y" column of the data frame, splitting by vehicle ID so the smoothing window never crosses from one vehicle into the next; a minimal sketch follows.
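For illustration, here is one way to do that (a sketch, not a definitive implementation: the file name "ngsim.txt" and the column names VehicleID and Local_Y are my assumptions, so adjust them to your file, which may need cleaned-up headers since "Local Y" contains a space). Near the ends of each vehicle's trajectory, where fewer than D neighbours exist, the window is simply truncated, which keeps the weighted average well defined:

# exponential-kernel smoother for one vehicle's trajectory
smooth_positions <- function(y, D = 15, delta = 5) {
  n <- length(y)
  sapply(seq_len(n), function(i) {
    k <- max(1, i - D):min(n, i + D)   # window of up to D points on each side
    w <- exp(-abs(i - k) / delta)      # kernel weights from the formula
    sum(w * y[k]) / sum(w)             # weighted average = smoothed position
  })
}

df <- read.table("ngsim.txt", header = TRUE)   # assumed file name
# ave() applies the smoother separately within each vehicle ID
df$Local_Y_smoothed <- ave(df$Local_Y, df$VehicleID, FUN = smooth_positions)

On 1.18 million rows this is not the fastest possible implementation, but it follows the formula exactly; since D and delta are constants, the weight vector could also be precomputed once per window size if speed becomes an issue.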

Related

How to reduce redundant cells in a column containing logged data

Is there a function to reduce the amount of redundant data from one column to match the number of cells in a second column?
I have logged data from two sensors that sent values at different rates. In 8 hours, I collected 11857 values from the first sensor and 8130 from the second one.
I need to compress the first column by deleting data to match the number of cells on the second column, so I can display synchronized values on a chart.
It is not a matter of cutting 3727 cells from the head or tail of the first column, but of deleting cells in a proportional way.
I've tried using the MOD function, but it does not give me the right amount of compression; e.g., by running =MOD(A1,3) and then filtering the cells containing '0' and deleting those rows, I get 7905, which is close to 8130, but the data still ends up shifted.
Edit:
I found a method that requires several steps:
Copy the sensors' data into two columns
Get the number of cells for both columns using COUNTA
Get the ratio of the smaller count to the bigger count
In a new column, create an index for the rows using =INT(ROW()*ratio)
Remove duplicate rows using the index column as the reference with Data > Remove Duplicates
It works, but it would be much faster if there were a ready-made function that could run over the provided data columns and copy the values into two new columns.
I tested this solution in LibreOffice Calc. The functions used are basic enough to be found in Excel as well.
Here's a sample with data from 2 sensors, s1 and s2, similar to yours:
Row s1 s2
1 2 3
2 4 6
3 6 9
4 8 12
5 10 15
6 12 18
7 14 21
8 16
9 18
10 20
11 22
What I did was match each s1 sample with the s2 sample whose relative position corresponds to it, so instead of ending up with a number of rows that have no s2 value, I padded the missing s2 values with the last sample taken in the same period of time (column s2a):
Row s1 s2 s2a
1 2 3 3
2 4 6 6
3 6 9 6
4 8 12 9
5 10 15 12
6 12 18 12
7 14 21 15
8 16 18
9 18 18
10 20 21
11 22 21
Assuming that s1 is column A and s2 is column B in the spreadsheet, the function you want on each cell of the new column is:
=INDIRECT( ADDRESS( CEILING( ROW()* COUNT(B:B)/COUNT(A:A)),2))
Let's go through it from the inside out:
COUNT(B:B)/COUNT(A:A) - this is the ratio, 7/11 ≈ 0.64 in the sample above. It indicates that the sample matching any given row of s1 will be found at about that row number x 0.64 in column s2.
CEILING - Spreadsheets don't start at row 0, so the first computed row HAS to be 1. I experimented with INT(), but if the ratio were less than 0.5 we would end up with a 0, which we don't want.
ADDRESS - Returns a string with the address of a cell given its row and column coordinates (e.g. ADDRESS(3, 2) yields "$B$3"; an optional third argument controls whether the row and column are absolute or relative).
INDIRECT - Returns the contents of the cell whose address is passed as a string (e.g. INDIRECT("X5") will return whatever value is stored in cell X5).
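As a quick check against the sample: for row 4, the ratio is 7/11 ≈ 0.636, CEILING(4 * 0.636) = CEILING(2.545) = 3, ADDRESS(3, 2) = "$B$3", and INDIRECT("$B$3") returns 9 - exactly the s2a value shown for row 4 in the padded table above.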
Alex

Stop duplicated indexing

I am trying to stop duplicate entries in my database (below), e.g. it should come up with an error message if the VechID, Collection date, and Return date are all the same. I am opening my table in Design View, clicking Indexes, and then indexing the relevant fields, but it won't let me and keeps refusing due to duplicate values. Is this the correct method?
Booking ID VechID CuID Collection date Return date
1 3 7 01/07/2017 10/07/2018
2 1 7 23/04/2017 16/05/2018
3 2 1 17/05/2017 28/05/2018
4 4 2 15/05/2017 20/05/2018
5 5 2 01/06/2017 24/06/2018
6 6 2 22/07/2017 29/08/2018
7 4 8 01/07/2017 15/07/2018
8 8 8 01/08/2017 20/08/2018
9 8 2 21/01/2017 20/01/2018
10 4 8 25/09/2017 02/10/2018
13 8 8 25/09/2017 02/10/2018
Yes, you need to create a unique index on the fields (vechID, Collection date, return date).
Of course you can't do that if you already have data in your table that violates this unique index.
Use Access's Find Duplicates Query Wizard to find and delete them.
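For illustration, here is the same thing in Access SQL (a sketch; the table name Bookings is an assumption based on the columns shown):

SELECT VechID, [Collection date], [Return date], COUNT(*) AS Duplicates
FROM Bookings
GROUP BY VechID, [Collection date], [Return date]
HAVING COUNT(*) > 1;

CREATE UNIQUE INDEX NoDuplicateBookings
ON Bookings (VechID, [Collection date], [Return date]);

Delete the rows the first query reports; after that, the CREATE UNIQUE INDEX statement (or the Indexes dialog in Design View) will go through.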

Pandas: flattening repeating/wrapped columns in csv file

It often happens that data will be given to you with wrapped columns. Consider, for example:
CCY Decimals CCY Decimals CCY Decimals
AUD/CAD 5 EUR/CZK 4 GBP/NOK 5
AUD/CHF 5 EUR/DKK 5 GBP/NZD 5
AUD/DKK 5 EUR/GBP 5 GBP/PLN 5
AUD/JPY 3 EUR/HKD 5 GBP/SEK 5
AUD/NOK 5 EUR/HUF 3 GBP/SGD 5
...
Which should be parsed as a dataframe of two columns (CCY and Decimals), not six. My question is, what is the most idiomatic way of achieving this?
I would have wanted something like the following:
data = pd.read_csv("file.csv")
data.groupby(axis=1,by=data.columns.map(lambda s: s.replace("\..",""))).\
apply(lambda df : df.values.flatten())
When reading the csv file we end up with columns CCY, Decimals, CCY.1, Decimals.1, etc. The groupby operation returns a collection of data frames:
<pandas.core.groupby.DataFrameGroupBy object at 0x3a52b10>
which we would then flatten using numpy functionality. So we would be converting the DataFrames with repeating columns into Series, and then merging these into a resulting DataFrame.
However, this doesn't work. I've tried passing different key arguments to groupby, but it always complains about being unable to reindex non-unique columns.
There are a number of existing questions that deal with flattening groups of columns (e.g. "Flattening" output of group.nth in Pandas), but I can't find any that do this for repeating columns.
To use groupby, I'd do:
>>> groups = df.groupby(axis=1,by=lambda x: x.rsplit(".",1)[0])
>>> pd.DataFrame({k: v.values.flat for k,v in groups})
CCY Decimals
0 AUD/CAD 5
1 EUR/CZK 4
2 GBP/NOK 5
3 AUD/CHF 5
4 EUR/DKK 5
5 GBP/NZD 5
6 AUD/DKK 5
7 EUR/GBP 5
8 GBP/PLN 5
9 AUD/JPY 3
10 EUR/HKD 5
11 GBP/SEK 5
12 AUD/NOK 5
13 EUR/HUF 3
14 GBP/SGD 5
[15 rows x 2 columns]
and then sort, if you need the pairs back in their original (unwrapped) order.
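If you'd rather avoid the axis=1 groupby (its behaviour has varied across pandas versions), a minimal alternative sketch is to slice the wrapped blocks out and stack them; the file name file.csv is taken from the question, and the assumption is that the columns always arrive in (CCY, Decimals) pairs:

import pandas as pd

df = pd.read_csv("file.csv")
# slice each (CCY, Decimals) pair of columns and give them uniform names
blocks = [
    df.iloc[:, i:i + 2].set_axis(["CCY", "Decimals"], axis=1)
    for i in range(0, df.shape[1], 2)
]
# stack the blocks vertically
result = pd.concat(blocks, ignore_index=True)

Because the blocks are concatenated one after another, this comes out in the unwrapped order directly, so no extra sort step is needed.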

Add together grouped rows into one value

I've got an issue where I've been handed data in this format from SQL and have imported it directly into SSRS 2008.
I do have access to the stored procedure for this report, but I don't want to change it, as a few other reports rely on it.
Project HoursSpent Cost
1 5 45
1 8 10
1 7 25
1 5 25
2 1 15
2 3 10
2 5 15
2 6 10
3 6 10
3 4 5
3 4 10
3 2 5
I've been struggling all morning to understand how and when I should be applying the SUM() function here.
I have already tried to SUM() the rows, but it still outputs the result above.
Should I be adding any extra groups?
Ideally, I need to have the following output:
Project HoursSpent Cost
1 25 105
2 15 40
3 16 30
EDIT: Here is my current structure:
"LineName" is a group for each project
You have to add a row group on "Project", since you want to sum up the data per project.
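For illustration, once the "Project" row group exists, the aggregate cells are plain Sum() expressions (a sketch; the field names HoursSpent and Cost are assumed to match the dataset columns):

=Sum(Fields!HoursSpent.Value)
=Sum(Fields!Cost.Value)

Placed in textboxes on the group row, Sum() aggregates over the current group scope rather than the whole dataset, which collapses each project's detail rows into the single totals row you are after.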

How to find a matrix element with an SQL query?

I have an array, and a table that references some elements of the array. My array looks like this:
1 2 3 4 5 6
7 8 9 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
And I have an area defined by a start point s=9, X=2, Y=2, and row count R=6,
which gives the boxes 9, 10, 11, 15, 16, 17, 21, 22, 23.
Now I am trying to write some SQL that checks whether the number 16 is in this area. I came up with logic like if ((s<16<s+X) || (s+6<16<s+X+6) || (s+12<16<s+X+12)), but can I write it as one SQL query? I am using MySQL.
This doesn't really have anything to do with SQL, I don't think, but something like the following condition is probably what you want. Since your example has the same value for X and Y, and "row count" sounds more like "number of rows" than "number of items in a row", I may have gotten rows and columns backwards from what you want.
-- zero-based position of box n: row = (n-1) div R, column = (n-1) mod R
set @s = 9, @x = 2, @y = 2, @R = 6, @testval = 16;

select (@testval - 1) div @R between (@s - 1) div @R and (@s - 1) div @R + @y   -- row lies inside the area
   and (@testval - 1) mod @R between (@s - 1) mod @R and (@s - 1) mod @R + @x;  -- column lies inside the area
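As a sanity check with these values: box 16 sits at zero-based row (16-1) div 6 = 2 and column (16-1) mod 6 = 3, while the area spans rows 1 to 3 and columns 2 to 4, so the query returns 1 - consistent with 16 appearing in your box list 9, 10, 11, 15, 16, 17, 21, 22, 23. Since the whole test is a single boolean expression, it can also be folded directly into the WHERE clause of one query instead of using session variables.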