Is it possible to create a map with 2 keys and a vector of values in Clojure? - csv

I am trying to create a program that reads a table of temperatures from a CSV file; I would like to access a collection of temperatures based on the year and day.
The first column is the year the temperatures were recorded.
The second column is a specific day of each month.
The remaining columns are the temperatures for each month.
For example, 2021 - 23 - 119 means the 23rd of June 2021 had a temperature of 119.
Year Day Months from January to December
2018 18 | 45 54 -11 170 99 166 173 177 175 93 74 69
2021 23 | 13 87 75 85 85 119 190 172 156 104 39 53
2020 23 | 63 86 62 128 131 187 163 162 138 104 60 70
So far I have managed to load the data from a CSV file with clojure.data.csv, which returns a sequence of vectors:
(defn Load_csv_file [filepath]
  (try
    (with-open [reader (io/reader filepath)]
      ;; realise all rows before the reader closes, and drop the header row
      (doall (rest (csv/read-csv reader))))
    (catch Exception ex
      (println (str "Exception: " (.toString ex))))))
I am currently trying to figure out how to implement this. My reasoning was to create a map with three keys holding the year, the day, and the vector of temperatures, and then filter it for a specific value.
Any advice on how I can implement this functionality?
Thanks!

I would go with something like this:
(require '[clojure.java.io :refer [reader]]
         '[clojure.string :refer [split blank?]]
         '[clojure.edn :as edn])

(with-open [r (reader "data.txt")]
  (doall (for [ln (rest (line-seq r))
               :when (not (blank? ln))
               :let [[y d & ms] (mapv edn/read-string (split ln #"\s+\|?\s+"))]]
           {:year y :day d :months (vec ms)})))
;;({:year 2018,
;; :day 18,
;; :months [45 54 -11 170 99 166 173 177 175 93 74 69]}
;; {:year 2021,
;; :day 23,
;; :months [13 87 75 85 85 119 190 172 156 104 39 53]}
;; {:year 2020,
;; :day 23,
;; :months [63 86 62 128 131 187 163 162 138 104 60 70]})
By the way, I'm not sure the CSV format allows mixed separators (as you have in your example); in any case, this approach handles them.

I would create a map of data that looks something like this:
{2020 {23 {:months [63 86 62 128 131 187 163 162 138 104 60 70]}}}
This way you can get the data out fairly easily:
(get-in data [2020 23 :months])
So, something like this:
(->> (Load_csv_file "file.csv")
     (reduce (fn [acc [year day & months]]
               ;; assumes each row has already been parsed into numbers
               (assoc-in acc [year day] {:months (vec months)}))
             {}))
This will result in the data structure I mentioned; now you just need to figure out the location of the data you want.
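The nested-map approach is easy to prototype in any language. As a minimal sketch of the same {year {day months}} shape and the assoc-in/get-in lookups (Python used purely for illustration; the rows are the sample data from the question):

```python
# Build {year: {day: [monthly temps]}} from rows like the sample table,
# mirroring the assoc-in / get-in idiom described above.
rows = [
    [2018, 18, 45, 54, -11, 170, 99, 166, 173, 177, 175, 93, 74, 69],
    [2021, 23, 13, 87, 75, 85, 85, 119, 190, 172, 156, 104, 39, 53],
    [2020, 23, 63, 86, 62, 128, 131, 187, 163, 162, 138, 104, 60, 70],
]

data = {}
for year, day, *months in rows:
    data.setdefault(year, {})[day] = months  # assoc-in equivalent

# get-in equivalent: month index 5 is June (months are zero-indexed here)
print(data[2021][23][5])  # -> 119, matching the 23rd June 2021 example
```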


ACR122U NFC Reader compatible with EMV contactless cards

I am trying to read an EMV card using an APDU command; however, it seems the ACR122U external reader is blocking the APDU command.
Select Command:
APDU-C -> 00 A4 04 00 0E 32 50 41 59 2E 53 59 53 2E 44 44 46 30 31 0E
APDU-R <- Error no response
Is it possible that the ACR122U reader is blocking the command?
You want to SELECT FILE 2PAY.SYS.DDF01, the contactless "Payment System Environment (PSE)",
to get the PSE directory; the card should respond with an Application Identifier (AID). However, you set Le to 0E; replace it with 00.
Corrected APDU:
PPSE = '00 a4 04 00 0e 32 50 41 59 2e 53 59 53 2e 44 44 46 30 31 00'
If the selection fails, the ADF doesn't exist (SW1/SW2 = 6A82).
If the selection succeeds, continue with a SELECT command for the Application Identifier (AID).
Possible AIDs:
A0000000031010
A0000000032020
A0000000041010
A0000000043060
AIDPrefix ='00 a4 04 00 07'
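As a sanity check on the Le fix: a SELECT-by-name APDU has the layout CLA INS P1 P2 Lc &lt;name&gt; Le. A small illustrative sketch (Python; the function name is my own) that rebuilds the corrected PPSE command byte for byte:

```python
def select_by_name(name: str) -> bytes:
    """Build a SELECT (by DF name) APDU: CLA INS P1 P2 Lc <name> Le=00."""
    body = name.encode("ascii")
    return bytes([0x00, 0xA4, 0x04, 0x00, len(body)]) + body + b"\x00"

apdu = select_by_name("2PAY.SYS.DDF01")
print(apdu.hex())  # 00a404000e325041592e5359532e444446303100
```

Note that Lc (0x0E, the 14-byte name length) stays as-is; only the trailing Le byte changes from 0E to 00.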

Writing multivariate objective function, with matrix data entities

S1=[20 32 44 56 68 80 92 104 116 128 140 152 164 176 188 200];
P=[16.82 26.93 37.01 47.1 57.21 67.32 77.41 87.5 97.54 107.7 117.8 127.9 138 148 158.2 168.3];
X = [0.119 0.191 0.262 0.334 0.405 0.477 0.548 0.620 0.691 0.763 0.835 0.906 0.978 1.049 1.120 1.192];
S = [2.3734 3.6058 5.0256 6.6854 8.6413 10.978 13.897 17.396 21.971 28.040 36.475 49.065 69.736 110.20 224.69 2779.1];
objective=@(x)((1250*x(3)*S(a)-(S(a)+x(2))*(P(a)+x(1)))/(1250*(S(a)+x(2))*(P(a)+x(1)))-x(5))^2+((x(2)*(P(a)^2+x(1)*P(a)))/(1250*x(4)*X(a)*x(3)-P(a)^2-x(1)*P(a))-S(a))^2+(74000/3*((X(a)*x(3)*S(a))/S1(a)*(S(a)+x(2)))-P(a))^2
%x0 = [Kp Ks mu.m Yp mu.d]
x0=[7.347705469 14.88611028 1.19747242 16.65696429 6.01E-03];
x=fminunc(objective,x0);
disp(x)
The code above is used to optimise the objective function so that all the unknown parameter values can be found. As you can see, the objective function uses four data vectors (S1, S, P, X), each with 16 entries. My question is: how do I write the objective function so that all the data entries are used?
The final objective function has to be the sum of the objective function shown above over a = 1:16. Any ideas?
Make the following changes to your code:
Replace all S(a) terms with S to use the whole vector; do the same for each of your four data vectors.
Convert all scalar operations in your objective function to elementwise ones, i.e. replace ^, * and / with .^, .* and ./. This produces 16 values, one for each index from 1 to 16 (i.e. what was previously referred to by a).
Wrap the resulting expression in a sum() call to reduce the 16 values to a single final value.
Use your optimiser as normal.
Resulting code:
S1 = [20 32 44 56 68 80 92 104 116 128 140 152 164 176 188 200];
P = [16.82 26.93 37.01 47.1 57.21 67.32 77.41 87.5 97.54 107.7 117.8 127.9 138 148 158.2 168.3];
X = [0.119 0.191 0.262 0.334 0.405 0.477 0.548 0.620 0.691 0.763 0.835 0.906 0.978 1.049 1.120 1.192];
S = [2.3734 3.6058 5.0256 6.6854 8.6413 10.978 13.897 17.396 21.971 28.040 36.475 49.065 69.736 110.20 224.69 2779.1];
objective = @(x) sum( ((1250.*x(3).*S-(S+x(2)).*(P+x(1)))./(1250.*(S+x(2)).*(P+x(1)))-x(5)).^2+((x(2).*(P.^2+x(1).*P))./(1250.*x(4).*X.*x(3)-P.^2-x(1).*P)-S).^2+(74000./3.*((X.*x(3).*S)./S1.*(S+x(2)))-P).^2 );
%x0 = [Kp Ks mu.m Yp mu.d]
x0 = [7.347705469 14.88611028 1.19747242 16.65696429 6.01E-03];
x = fminunc(objective,x0);
disp(x)
Note that you can make this code a lot clearer for human readers; I just made the "direct" changes that illustrate the conversion from your scalar expression to the desired vectorised one.
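The scalar-to-vectorised conversion generalises beyond MATLAB. A tiny illustrative sketch (NumPy, with made-up data, not the original model) showing that dropping the index, switching to elementwise operations, and wrapping in a sum gives the same value as the explicit loop:

```python
import numpy as np

# Made-up stand-ins for two of the data vectors (illustration only).
S = np.array([2.0, 3.0, 5.0, 7.0])
P = np.array([1.0, 2.0, 3.0, 4.0])

def objective_loop(x):
    # scalar form: index a runs over every data point explicitly
    return sum((x[0] * S[a] - P[a]) ** 2 for a in range(len(S)))

def objective_vec(x):
    # vectorised form: drop the index, use elementwise ops, wrap in a sum
    return np.sum((x[0] * S - P) ** 2)

x = [0.5]
print(objective_loop(x), objective_vec(x))  # both print 0.75
```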

How do I query with a join getting all the data in Om Next?

In Om Next, when having data such as:
{:table {:name "Disk Performance Table"
         :data [:statistics :performance]}
 :chart {:name "Combined Graph"
         :data [:statistics :performance]}
 :statistics {:performance {:cpu-usage [45 15 32 11 66 44]
                            :disk-activity [11 34 66 12 99 100]
                            :network-activity [55 87 20 1 22 82]}}}
you can query it with:
[{:chart [{:data [:cpu-usage]}]}]
to get the chart, join the data, and dig down to cpu-usage in the performance record:
{:chart {:data {:cpu-usage [45 15 32 11 66 44]}}}
How do I get the whole performance record instead?
Another potential query is this:
[{:chart [:data]}]
but it doesn't resolve the join:
{:chart {:data [:statistics :performance]}}
There are no components, as this is only about the data and the query. This is from exercise number 2 on queries here: https://awkay.github.io/om-tutorial/#!/om_tutorial.D_Queries, which uses om/db->tree to run the queries.
This is how you do it:
[{:chart [{:data [*]}]}]
which gives you:
{:chart {:data {:cpu-usage [45 15 32 11 66 44]
                :disk-activity [11 34 66 12 99 100]
                :network-activity [55 87 20 1 22 82]}}}
Without seeing the actual components with queries and idents, I can't be sure.
However, you should be able to query for [{:chart [:data]}]. See om/db->tree. Assuming that you have structured your components with the right queries and idents, om/db->tree converts your flat app state into a tree so that your read functions see the following data when called:
{:table {:name "Disk Performance Table"
         :data {:cpu-usage [45 15 32 11 66 44]
                :disk-activity [11 34 66 12 99 100]
                :network-activity [55 87 20 1 22 82]}}
 :chart {:name "Combined Graph"
         :data {:cpu-usage [45 15 32 11 66 44]
                :disk-activity [11 34 66 12 99 100]
                :network-activity [55 87 20 1 22 82]}}}
If that query doesn't work, [{:chart [{:data [:cpu-usage :disk-activity :network-activity]}]}] should certainly do the trick.

Convert many JSON objects into a data frame in R

I have many JSON objects, all in the same format, in one JSON file,
as below. I want to convert them into an R data frame and then extract all the values of latency. But when I enter the command
json_data <- fromJSON(file=json_file)
only the first JSON object is stored in the data frame. What should I do?
Thanks!
{"task":[{"type":"ping","id":1,"value":" 159 159 152 153 149 147 150 151 148 149","IsFinished":true},{"type":"latency","id":2,"value":147,"IsFinished":true},{"type":"throughput","id":3,"value":"","IsFinished":false},{"type":"DNS","id":4,"value":12,"IsFinished":true}],"measurementTimes":10,"url":""}{"task":[{"type":"ping","id":1,"value":" 166 165 179 181 159 162 166 159 161 162","IsFinished":true},{"type":"latency","id":2,"value":159,"IsFinished":true},{"type":"throughput","id":3,"value":"","IsFinished":false},{"type":"DNS","id":4,"value":7,"IsFinished":true}],"measurementTimes":10,"url":""}{"task":[{"type":"ping","id":1,"value":" 172 172 159 160 159 159 159 158 160 162","IsFinished":true},{"type":"latency","id":2,"value":158,"IsFinished":true},{"type":"throughput","id":3,"value":"","IsFinished":false},{"type":"DNS","id":4,"value":14,"IsFinished":true}],"measurementTimes":10,"url":""}{"task":[{"type":"ping","id":1,"value":" 182 192 171 184 160 159 156 157 180 171","IsFinished":true},{"type":"latency","id":2,"value":156,"IsFinished":true},{"type":"throughput","id":3,"value":"","IsFinished":false},{"type":"DNS","id":4,"value":26,"IsFinished":true}],"measurementTimes":10,"url":""}{"task":[{"type":"ping","id":1,"value":" 158 186 168 189 190 233 168 160 188 157","IsFinished":true},{"type":"latency","id":2,"value":157,"IsFinished":true},{"type":"throughput","id":3,"value":"","IsFinished":false},{"type":"DNS","id":4,"value":1,"IsFinished":true}],"measurementTimes":10,"url":""}
Your input JSON is malformed: it has multiple objects at the root level. This is akin to defining an XML document with more than one root, which is of course not allowed. If you create an outer element containing an array of those objects, you will be able to load the file into R with fromJSON. Here is what the file should look like:
{
"root" : [
{
"task":
[
{"type":"ping","id":1,"value":" 159 159 152 153 149 147 150 151 148 149","IsFinished":true},
{"type":"latency","id":2,"value":147,"IsFinished":true},
{"type":"throughput","id":3,"value":"","IsFinished":false},
{"type":"DNS","id":4,"value":12,"IsFinished":true}
],
"measurementTimes":10,
"url":""
},
{
"task":
[
{"type":"ping","id":1,"value":" 166 165 179 181 159 162 166 159 161 162","IsFinished":true},
{"type":"latency","id":2,"value":159,"IsFinished":true},
{"type":"throughput","id":3,"value":"","IsFinished":false},
{"type":"DNS","id":4,"value":7,"IsFinished":true}
],
"measurementTimes":10,
"url":""
},
... and so on for other entries
]
}
Here is what I saw in the R console:
> summary(json_data)
Length Class Mode
root 5 -none- list
And entering the variable name json_data gave me a dump on the entire JSON structure.
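If you would rather not hand-edit the file, the input can also be treated as a stream of concatenated JSON objects and split programmatically. An illustrative sketch (Python's json.JSONDecoder.raw_decode; in R you could similarly split the text on the "}{" boundaries before parsing), using a trimmed version of the sample data:

```python
import json

def iter_json_objects(text: str):
    """Yield each top-level JSON object from a concatenated stream."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        obj, end = decoder.raw_decode(text, idx)
        yield obj
        idx = end
        while idx < len(text) and text[idx].isspace():
            idx += 1  # tolerate whitespace between objects

# trimmed sample in the question's format
sample = ('{"task":[{"type":"latency","id":2,"value":147}]}'
          '{"task":[{"type":"latency","id":2,"value":159}]}')

latencies = [t["value"]
             for obj in iter_json_objects(sample)
             for t in obj["task"] if t["type"] == "latency"]
print(latencies)  # [147, 159]
```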

Using 2 worksheets to calculate in SQL

I have data going back to 2004, so to save processing time and pressure on our servers I have to simplify my calculations by moving from Excel to SQL.
My data is:
Period Employee EmOrg EmType Total Hours Mode
201306 GOVINP1 RSA/PZB/T00 S 180 66
201306 LANDCJ1 RSA/PZB/T00 S 200 35
201306 WOODRE RSA/PZB/T00 S 180 34
201306 MOKOHM1 RSA/JNB/T00 S 160 33
201306 KAPPPJ RSA/PLZ/T00 S 160 32
201306 CAHISJ RSA/PZB/T00 S 187 31
201306 ZEMUN RSA/PZB/T00 S 180 31
201306 SAULDD1 RSA/PZB/T00 S 190 28
201306 JEROP1 RSA/DUR/T00 S 188 26
201306 NGOBS1 RSA/PZB/T00 S 204 24
201306 ZONDNS2 RSA/PZB/T00 S 192 23
201306 DLAMMP RSA/PZB/T00 S 201 23
201306 MPHURK RSA/PLZ/T00 S 160 22
201306 MNDAMB RSA/PZB/T00 S 188 21
My desired outcome is:
Period EmOrg EmType TotalHours FTE S
201308 RSA/BFN/T00 S 198 1
201308 RSA/CPT/T00 S 744 3.757575
201308 RSA/DUR/T00 S 805 4.065656
201308 RSA/JNB/T00 S 396 2
201308 RSA/PLZ/T00 S 563 2.843434
201308 RSA/PTA/T00 S 594 3
201308 RSA/PZB/T00 S 4882 24.656565
And my query:
SELECT
LD.Period,
LD.EmOrg,
LD.EmType,
Sum(LD.RegHrs) AS 'Total Hours',
Sum(LD.RegHrs) / 198 As 'FTE_S'
FROM
SSI.dbo.LD LD
GROUP BY LD.Period , LD.EmOrg , LD.EmType
HAVING (LD.EmOrg Like '%T00')
AND (LD.EmType = 'S')
How do I refer to a column in a different worksheet to use as my Mode, rather than dividing by a hard-coded number? Different months have different modes, and using a literal number would give the wrong output in other months.
You need to create a separate table for your Mode per month and then use a JOIN to get that value and use it.
Something like this:
SELECT
LD.Period,
LD.EmOrg,
LD.EmType,
Sum(LD.RegHrs) AS 'Total Hours',
Sum(LD.RegHrs) / M.Mode As 'FTE_S'
FROM
SSI.dbo.LD LD
INNER JOIN SSI.dbo.Mode M
ON LD.Period = M.Period -- not sure whether this should be Period or Month
GROUP BY LD.Period , LD.EmOrg , LD.EmType , M.Mode -- M.Mode must be grouped since it appears outside an aggregate
HAVING (LD.EmOrg Like '%T00')
AND (LD.EmType = 'S')
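To convince yourself the join behaves like the per-month division, here is a small illustrative sketch against an in-memory SQLite database (Python; the table and column names follow the answer above rather than your real schema, and the sample hours and modes are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE LD  (Period INT, EmOrg TEXT, EmType TEXT, RegHrs REAL);
    CREATE TABLE Mode(Period INT, Mode REAL);
    INSERT INTO LD VALUES
        (201306, 'RSA/PZB/T00', 'S', 180),
        (201306, 'RSA/PZB/T00', 'S', 200),
        (201308, 'RSA/PZB/T00', 'S', 198);
    INSERT INTO Mode VALUES (201306, 190), (201308, 198);
""")
rows = con.execute("""
    SELECT LD.Period, LD.EmOrg, LD.EmType,
           SUM(LD.RegHrs)          AS TotalHours,
           SUM(LD.RegHrs) / M.Mode AS FTE_S   -- divide by that month's Mode
    FROM LD
    INNER JOIN Mode M ON LD.Period = M.Period
    WHERE LD.EmOrg LIKE '%T00' AND LD.EmType = 'S'
    GROUP BY LD.Period, LD.EmOrg, LD.EmType, M.Mode
    ORDER BY LD.Period
""").fetchall()
print(rows)
# [(201306, 'RSA/PZB/T00', 'S', 380.0, 2.0), (201308, 'RSA/PZB/T00', 'S', 198.0, 1.0)]
```

Each period is divided by its own Mode (380/190 and 198/198 here), which is exactly what the hard-coded 198 could not do.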