How to get all data from a CSV file into JMeter?

I have a table on the page with a maximum of 20 rows, and I also created a CSV file with 20 rows in it.
Current scenario: JMeter picks only one row per user from the CSV file.
Required: I need JMeter to read all rows (i.e. 20) for one user.
Table:
Name1 ---- roll1 --- PhnNumber1 --- date1 --- month1 --- year1 --- Dept1
Name2 ---- roll2 --- PhnNumber2 --- date2 --- month2 --- year2 --- Dept2
Name3 ---- roll3 --- PhnNumber3 --- date3 --- month3 --- year3 --- Dept3
.
.
.
Name20 ---- roll20 --- PhnNumber20 --- date20 --- month20 --- year20 --- Dept20
CSV file:
Name --- roll --- PhnNumber --- date --- month --- year --- Dept
(rows 1 through 20)
So how do I configure JMeter to read all 20 rows from the CSV file for one user?

It's not clear in which format you want to ingest the contents of the CSV file.
Given your CSV file looks like:
Name1,roll1,PhnNumber1,date1,month1,year1,Dept1
Name2,roll2,PhnNumber2,date2,month2,year2,Dept2
Name3,roll3,PhnNumber3,date3,month3,year3,Dept3
If you need it fully inline or in a JMeter Variable (hopefully this is what you're looking for), there is the __FileToString() function.
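For example, a minimal sketch (the file name test.csv and the variable name csvData are placeholders):
${__FileToString(test.csv,,csvData)} - stores the whole file content into the csvData variable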
If you want individual values, you can go for the __CSVRead() function:
${__CSVRead(test.csv,0)} - will give you Name1
${__CSVRead(test.csv,1)} - will give you roll1
${__CSVRead(test.csv,next)} - will proceed to the next row
If you want something custom, there is the __groovy() function, which gives you full freedom over how to read the file and where and how to store the results.
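For instance, a minimal JSR223/Groovy sketch that loads every row into JMeter variables for a single user (the file name test.csv and the variable naming scheme are assumptions, not part of the original answer):
// JSR223 element with language Groovy: read every line of the CSV file
// and expose each field as a JMeter variable (name_1, roll_1, ..., name_20, ...)
new File('test.csv').readLines().eachWithIndex { line, index ->
    def fields = line.split(',')
    def row = index + 1
    vars.put('name_' + row, fields[0])
    vars.put('roll_' + row, fields[1])
    vars.put('phone_' + row, fields[2])
    // store the remaining columns the same way if they are needed
}
Subsequent samplers can then reference the values as ${name_1}, ${roll_1}, and so on.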
More information on JMeter Functions concept: Apache JMeter Functions - An Introduction


How to join or unionize a list of list elements (unknown length)

I'm parsing some values using json_query. After extracting the values I'm left with a list of elements, each of which contains a list of values. My goal is to have a single list of un-nested values.
How can I achieve this?
E.g.:
my_list: [ [1,2],[3,4],[5,6] ]
Should become
my_list: [1,2,3,4,5,6]
I can't use my_list[0] | union(my_list[1]) | union(my_list[2]) because my_list is dynamic.
Use the flatten filter.
Given:
- debug:
    msg: "{{ [ [1,2],[3,4],[5,6] ] | flatten(1) }}"
This yields the expected list:
ok: [localhost] =>
  msg:
  - 1
  - 2
  - 3
  - 4
  - 5
  - 6
And since you state that you are using json_query, note that there is also a way to flatten in JMESPath, called a flatten projection, so you might bake this into your existing query.
As an example:
- debug:
    msg: "{{ [ [1,2],[3,4],[5,6] ] | json_query('[]') }}"
This will also yield the expected result.
You can use a custom plugin to handle Python-like jobs easily. To do this, create a folder named filter_plugins (make sure to use this reserved name) in the same folder as your playbook, and add your Python filter there.
$ tree
├── nested_list.yml
├── filter_plugins
│   └── nested_union.py
└── inventory
Make sure the filter contains the FilterModule class and filters method:
$ cat nested_union.py
class FilterModule(object):
    def filters(self):
        return {
            'nested_union': self.nested_union,
        }

    def nested_union(self, nested_list):
        return [x for inner_list in nested_list for x in inner_list]
Call the new filter from your Ansible playbook:
---
- name:
  hosts: local
  tasks:
    - name: Union merged lists
      vars:
        my_list: [ [1,2],[3,4],[5,6] ]
      set_fact:
        new_list: "{{ my_list | nested_union }}"
...
Here is the inventory file, just for reference and to complete the example:
[local]
127.0.0.1 ansible_connection=local
And here is the result of the execution:
$ ansible-playbook -i inventory nested_list.yml -v
-- snip --
TASK [Union merged lists]
ok: [127.0.0.1] => {"ansible_facts": {"new_list": [1, 2, 3, 4, 5, 6]}, "changed": false}

InfluxDB: CSV import problem, missing values

This is my first post on Stack Overflow and I hope to do it properly.
I'm new to InfluxDB and Telegraf, but as part of a project I want to import metrics into InfluxDB; they come from network equipment and are exported by a network manager as .csv files.
I get:
#IP :
  2020-05-YY :
    CSV1
    CSV2
  2020-05-ZZ
  ...
The structure of the csv file is as follows:
TimeStamp, NetworkElement_Name, Typeofmeasurement, [depends on the measurement, but in the following example it will be the memory of each card.]
Here is an example:
TimeStamp,NetworkElement_Name,Typeofmeasurement,Object,memAbsoluteUsage
2020-05-05T20:00:00+02:00,router1,CPU/Memory Usage,card1,1075
2020-05-05T20:00:00+02:00,router1,CPU/Memory Usage,card2,832
This file exists twice for the same timestamp, but with a different "NetworkElement_Name", "Object" and value.
On the Telegraf side, I created a ".conf" file for each imported CSV, as follows:
[[inputs.file]]
files = ["/metric/data/data/clean_data/**/**/router_cpuMemUsage.csv"]
data_format = "csv"
csv_header_row_count = 1
csv_skip_rows = 0
csv_skip_columns = 0
csv_delimiter = ","
csv_column_types = ["string","string","string","string","int"]
csv_measurement_column = "Typeofmeasurement"
csv_timestamp_column = "TimeStamp"
csv_timestamp_format = "2006-01-02T15:04:05-07:00"
[[outputs.influxdb]]
database = "router Metrics"
And the data seems to be imported... but I realize that some values are missing...
I have difficulty understanding / explaining the problem, but I can't get all the values recorded at a specific time.
The query returns:
> SELECT * FROM "CPU/Memory Usage" WHERE "NE Name" =~ /router1/ ORDER BY DESC LIMIT 5
name: CPU/Memory Usage
time NE Name Object ID Object Type Time Stamp memUsage
---- ------- --------- ----------- ---------- ----------
2020-05-07T06:45:00Z router1 card1 CPU/Memory Usage 2020-05-07T08:45:00+02:00 1075
2020-05-07T06:30:00Z router1 card1 CPU/Memory Usage 2020-05-07T08:30:00+02:00 1075
2020-05-07T06:15:00Z router1 card1 CPU/Memory Usage 2020-05-07T08:15:00+02:00 1075
2020-05-07T06:00:00Z router1 card1 CPU/Memory Usage 2020-05-07T08:00:00+02:00 1075
2020-05-07T05:45:00Z router1 card1 CPU/Memory Usage 2020-05-07T07:45:00+02:00 1075
I only have the information for card1, not for card2, and if I remove the WHERE clause, for the same timestamp I still don't have all the information: I'm missing the information for router2.
The values present for one router at a given timestamp will not be present for the other.
I have trouble understanding where the problem comes from.
If one of you has an idea :)
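One thing that might explain it (a hedged guess, not a verified fix): InfluxDB keeps only one point per combination of measurement, tag set, and timestamp, and the config above defines no tag columns, so rows for different cards and routers that share a timestamp would overwrite each other. Telegraf's CSV parser can promote the identifying columns to tags, for example:
csv_tag_columns = ["NetworkElement_Name", "Object"]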

Assign Puppet Hash to hieradata YAML

I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want; in the end I don't want to access a fact.
---
filesystems:
  - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems', Array, 'deep', [])
2 $filesystems = $filesystemsarray.map | $fs | {
3   notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
This seems to be a Hash as expected, but the notice in Line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What am I doing wrong?
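As a first diagnostic (a sketch, assuming a Puppet version that ships the built-in type() function), it may help to check what the lookup actually returned:
notice(type($filesystemsarray[0]['partitions']))
If this prints a String type rather than a Hash, the "%{::partitions}" interpolation has stringified the fact, which would match the substring error above.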

Importing/Conditioning a file.txt with a "kind" of JSON structure in R

I wanted to import a .txt file into R, but the format is really special: it looks like JSON, and I don't know how to import it. Here is an example of my data:
{"datetime":"2015-07-08 09:10:00","subject":"MMM","sscore":"-0.2280","smean":"0.2593","svscore":"-0.2795","sdispersion":"0.375","svolume":"8","sbuzz":"0.6026","lastclose":"155.430000000","companyname":"3M Company"},{"datetime":"2015-07-07 09:10:00","subject":"MMM","sscore":"0.2977","smean":"0.2713","svscore":"-0.7436","sdispersion":"0.400","svolume":"5","sbuzz":"0.4895","lastclose":"155.080000000","companyname":"3M Company"},{"datetime":"2015-07-06 09:10:00","subject":"MMM","sscore":"-1.0057","smean":"0.2579","svscore":"-1.3796","sdispersion":"1.000","svolume":"1","sbuzz":"0.4531","lastclose":"155.380000000","companyname":"3M Company"}
To deal with this I used this code:
test1 <- read.csv("C:/Users/test1.txt", header=FALSE)
## Imported as 5 observations (the 5th is all empty) of 1700 variables
# (in fact 40 observations of 11 variables). When I imported the
# .txt file, one line (the 5th observation) was empty, and the 4 lines
# of data, of 11 variables each, were placed next to each other.
# Get the different lines
part1=test1[1:10]
part2=test1[11:20]
part3=test1[21:30]
part4=test1[31:40]
...
## Remove the empty line (there was an empty line after each part)
part1=part1[-5,]
part2=part2[-5,]
part3=part3[-5,]
...
## Rename the columns
names(part1)=c("Date Time","Subject","Sscore","Smean","Svscore","Sdispersion","Svolume","Sbuzz","Last close","Company name")
names(part2)=c("Date Time","Subject","Sscore","Smean","Svscore","Sdispersion","Svolume","Sbuzz","Last close","Company name")
names(part3)=c("Date Time","Subject","Sscore","Smean","Svscore","Sdispersion","Svolume","Sbuzz","Last close","Company name")
...
## Assemble data to have one dataset
data=rbind(part1,part2,part3,part4,part5,part6,part7,part8,part9,part10)
## Format Date Time
times <- as.POSIXct(data$`Date Time`, format='{datetime:%Y-%m-%d %H:%M:%S')
data$`Date Time` <- times
## Keep only the Date
data$Date <- as.Date(times)
## Format data - Remove text
data$Subject <- gsub("subject:", "", data$Subject)
data$Sscore <- gsub("sscore:", "", data$Sscore)
...
So my code works to rebuild the data, but it is cumbersome and long. I know there are better ways to do it, so if you could help me with that I would be very grateful.
There are many packages that read JSON, e.g. rjson, jsonlite, RJSONIO (they will turn up in a Google search) - just pick one and give it a go.
e.g.
library(jsonlite)
json.text <- '{"datetime":"2015-07-08 09:10:00","subject":"MMM","sscore":"-0.2280","smean":"0.2593","svscore":"-0.2795","sdispersion":"0.375","svolume":"8","sbuzz":"0.6026","lastclose":"155.430000000","companyname":"3M Company"},{"datetime":"2015-07-07 09:10:00","subject":"MMM","sscore":"0.2977","smean":"0.2713","svscore":"-0.7436","sdispersion":"0.400","svolume":"5","sbuzz":"0.4895","lastclose":"155.080000000","companyname":"3M Company"},{"datetime":"2015-07-06 09:10:00","subject":"MMM","sscore":"-1.0057","smean":"0.2579","svscore":"-1.3796","sdispersion":"1.000","svolume":"1","sbuzz":"0.4531","lastclose":"155.380000000","companyname":"3M Company"}'
x <- fromJSON(paste0('[', json.text, ']'))
datetime subject sscore smean svscore sdispersion svolume sbuzz lastclose companyname
1 2015-07-08 09:10:00 MMM -0.2280 0.2593 -0.2795 0.375 8 0.6026 155.430000000 3M Company
2 2015-07-07 09:10:00 MMM 0.2977 0.2713 -0.7436 0.400 5 0.4895 155.080000000 3M Company
3 2015-07-06 09:10:00 MMM -1.0057 0.2579 -1.3796 1.000 1 0.4531 155.380000000 3M Company
I pasted the '[' and ']' around your JSON because you have multiple JSON elements (the rows in the data frame above), and for this to be well-formed JSON it needs to be an array, i.e. [ {...}, {...}, {...} ] rather than {...}, {...}, {...}.
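Applied to the original file (reusing the path from the question), a sketch would be:
library(jsonlite)
json.text <- paste(readLines("C:/Users/test1.txt"), collapse = "")
x <- fromJSON(paste0('[', json.text, ']'))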

Fullpath of current TCL script

Is there a possibility to get the full path of the currently executing TCL script?
In PHP it would be: __FILE__
Depending on what you mean by "currently executing TCL script", you might actually seek info script, or possibly even info nameofexecutable or something more esoteric.
The correct way to retrieve the name of the file that the current statement resides in is this (a true equivalent to PHP/C++'s __FILE__):
set thisFile [ dict get [ info frame 0 ] file ]
Pseudocode (how it works):
set thisFile <value> : sets variable thisFile to value
dict get <dict> file : returns the file value from a dict
info frame <#> : returns a dict with information about the frame at the specified stack level (#), and 0 will return the most recent stack frame
NOTICE: See end of post for more information on info frame.
In this case, the file value returned from info frame is already normalized, so file normalize <path> is not needed.
The difference between info script and info frame matters mainly when working with Tcl packages. If info script is used in a Tcl file that was loaded during a package require (package require <name>), then info script would return the path to the currently executing Tcl script and would not provide the actual name of the Tcl file that contains the info script command; however, the info frame example provided here would correctly return the name of the file that contains the command.
If you want the name of the script currently being evaluated, then:
set sourcedScript [ info script ]
If you want the name of the script (or interpreter) that was initially invoked, then:
set scriptAtInvocation $::argv0
If you want the name of the executable that was initially invoked, then:
set exeAtInvocation [ info nameofexecutable ]
UPDATE - Details about: info frame
Here is what a stack trace looks like within Tcl. The frame_index column shows what info frame $frame_index returns for values from 0 through [ info frame ].
Calling info frame [ info frame ] is functionally equivalent to info frame 0, but using 0 is of course faster.
There are actually only 1 to [ info frame ] stack frames, and 0 behaves like [ info frame ]. In this example you can see that 0 and 5 (which is [ info frame ]) are the same:
frame_index: 0 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
frame_index: 1 | type = source | line = 6 | level = 4 | file = /tcltest/main.tcl | cmd = a
frame_index: 2 | type = source | proc = ::a | line = 2 | level = 3 | file = /tcltest/a.tcl | cmd = b
frame_index: 3 | type = source | proc = ::b | line = 2 | level = 2 | file = /tcltest/b.tcl | cmd = c
frame_index: 4 | type = source | proc = ::c | line = 5 | level = 1 | file = /tcltest/c.tcl | cmd = stacktrace
frame_index: 5 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
See:
https://github.com/Xilinx/XilinxTclStore/blob/master/tclapp/xilinx/profiler/app.tcl#L273
You want $argv0
You can use [file normalize] to get the fully normalized name, too.
file normalize $argv0
file normalize [info nameofexecutable]
Seconds after I posted my question ... lindex $argv 0 is a good starting point ;-)