JMeter 2.13 not loading properties file - csv

I'm running a test with JMeter 2.13 on Ubuntu 14.04, writing the results as CSV. I use the following command line to get it to read the properties file and add fields to the CSV output:
./jmeter -n -p /opt/apache-jmeter-2.13/bin/jmeter.properties -l n1.csv -t Apache-DB.jmx
with the following in the properties file:
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.connect_time=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.default_delimiter=,
JMeter doesn't seem to pick it up, as no field headers are printed. Here's the first line of the CSV file:
1448233211742,313,HTTP Request,200,OK,Thread Group 1-1,text,false,209666,1,1,96
I've also tried --propfile instead of -p, which didn't work. Am I doing something wrong, or does JMeter not read those configuration options as it should?
Background information / helpful information for others
I have managed to turn on a couple of extra fields using command-line switches (just in case anyone finds this on Google). This puts field labels at the top of the JMeter CSV output.
./jmeter -n -Jjmeter.save.saveservice.print_field_names=true -Jjmeter.save.saveservice.connect_time=true -l n1.csv -t Apache-DB.jmx
For reference, here are the default JMeter CSV fields:
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,grpThreads,allThreads,Latency
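With print_field_names=true and connect_time=true enabled as above, the first line should instead be a header row; assuming the connect-time column is appended after Latency under the name Connect (which is how JMeter labels it), it would look roughly like:
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,grpThreads,allThreads,Latency,Connect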

The header at the top of jmeter.properties advises:
################################################################################
#
# THIS FILE SHOULD NOT BE MODIFIED
#
# This avoids having to re-apply the modifications when upgrading JMeter
# Instead only user.properties should be modified:
# 1/ copy the property you want to modify to user.properties from jmeter.properties
# 2/ Change its value there
#
################################################################################
Your settings are likely being overridden when the default saveservice properties are loaded after jmeter.properties.
Try putting your properties in user.properties.
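A minimal sketch (assuming the stock bin/user.properties that ships next to jmeter.properties): append the overrides there and drop the -p flag, since user.properties is picked up automatically:
# bin/user.properties -- read after jmeter.properties, so these values win
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.connect_time=true
Then run JMeter as usual:
./jmeter -n -l n1.csv -t Apache-DB.jmx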

Related

Setting up a CSV Data Adapter locally

I am trying to set up the Data Visualization extension to use data from a CSV file for the sensors, based on this example:
https://forge.autodesk.com/en/docs/dataviz/v1/developers_guide/advanced_topics/csv_adapter/
The CSV data I am trying to use is the default Hyperion-1.csv in the folder server\gateways\csv. Do I need to add/change some other settings as well?
It shows the following error in the Chrome console:
I have these settings for the CSV in the .env file.
And these in devices.json in the server\gateways\synthetic-data folder.
I've just taken the following steps to enable the CSV data adapter which seemed to work fine:
Clone the repo: git clone https://github.com/Autodesk-Forge/forge-dataviz-iot-reference-app
Install dependencies: npm install
Create a copy of server/env_template and rename it to server/.env
Modify the contents of server/.env, commenting out all the initial env variables, uncommenting the CSV-related ones, and setting their values:
# FORGE_CLIENT_ID=
# FORGE_CLIENT_SECRET=
# FORGE_ENV=AutodeskProduction
# FORGE_API_URL=https://developer.api.autodesk.com
# FORGE_CALLBACK_URL=http://localhost:9000/oauth/callback
#
# FORGE_BUCKET=
# ENV=local
# ADAPTER_TYPE=local
## Please uncomment the following part if you want to connect to Azure IoTHub and Time Series Insights
## Connect to Azure IoTHub and Time Series Insights
# ADAPTER_TYPE=azure
# AZURE_IOT_HUB_CONNECTION_STRING=
# AZURE_TSI_ENV=
#
## Azure Service Principle
# AZURE_CLIENT_ID=
# AZURE_APPLICATION_SECRET=
# AZURE_TENANT_ID=
# AZURE_SUBSCRIPTION_ID=
#
## Path to Device Model configuration File
# DEVICE_MODEL_JSON=
## End - Connect to Azure IoTHub and Time Series Insights
## Please uncomment the following part if you want to use a CSV file as the time series provider
ADAPTER_TYPE=csv
CSV_MODEL_JSON=server/gateways/synthetic-data/device-models.json
CSV_DEVICE_JSON=server/gateways/synthetic-data/devices.json
CSV_FOLDER=server/gateways/csv/
CSV_DATA_START=2011-02-01T08:00:00.000Z
CSV_DATA_END=2011-02-20T13:51:10.511Z
CSV_DELIMITER="\t"
CSV_LINE_BREAK="\n"
CSV_TIMESTAMP_COLUMN="time"
CSV_FILE_EXTENSION=".csv"
## End - Please uncomment the following part if you want to use a CSV file as the time series provider
Run the app with ENV set to "local": ENV=local npm run dev
After these steps the app is running successfully, however you'll get some other errors because the server/gateways/csv folder only contains data for a single sensor (Hyperion-1).
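For reference, given the CSV_* settings above (tab delimiter, a "time" timestamp column, .csv extension), each sensor file is shaped roughly like the following hypothetical example; the actual columns in Hyperion-1.csv depend on the sample data:
time	Temperature	Humidity
2011-02-01T08:00:00.000Z	22.5	45.1
2011-02-01T09:00:00.000Z	22.9	44.7
(columns separated by tabs, per CSV_DELIMITER)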
Btw, I've been working on an alternative DataViz sample app that aims to be simpler and easier to reuse: https://github.com/petrbroz/forge-iot-extensions-demo (which uses https://github.com/petrbroz/forge-iot-extensions under the hood).

Packer HCL2 config file support

In https://packer.io/guides/hcl/from-json-v1/, it says
Note: Starting from version 1.5.0 Packer can read HCL2 files.
And my Packer is packer_1.5.5_linux_amd64.zip, which is supposed to be able to read HCL2 files. However, when I tried it, I got:
$ packer build -only=docker hcl-example
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
==> Builds finished but no artifacts were created.
$ packer build -h
Usage: packer build [options] TEMPLATE
Will execute multiple builds in parallel as defined in the template.
The various artifacts created by the template will be outputted.
Options:
-color=false Disable color output. (Default: color)
-debug Debug mode enabled for builds.
-except=foo,bar,baz Run all builds and post-processors other than these.
-only=foo,bar,baz Build only the specified builds.
-force Force a build to continue if artifacts exist, deletes existing artifacts.
-machine-readable Produce machine-readable output.
-on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask.
-parallel=false Disable parallelization. (Default: true)
-parallel-builds=1 Number of builds to run in parallel. 0 means no limit (Default: 0)
-timestamp-ui Enable prefixing of each ui output with an RFC3339 timestamp.
-var 'key=value' Variable for templates, can be used multiple times.
-var-file=path JSON file containing user variables. [ Note that even in HCL mode this expects file to contain JSON, a fix is coming soon ]
and I don't see any switches from above to switch to HCL2 mode.
What am I missing here?
$ packer version
Packer v1.5.5
$ cat hcl-example
# the source block is what was defined in the builders section and represents a
# reusable way to start a machine. You build your images from that source.
source "amazon-ebs" "example" {
ami_name = "packer-test"
region = "us-east-1"
instance_type = "t2.micro"
}
[UPDATE:]
To address Matt's comment/concern, I changed the content of hcl-example to the whole example in https://packer.io/guides/hcl/from-json-v1/ and renamed it:
mv hcl-example hcl-example.hcl
$ packer validate hcl-example.hcl
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
Naming it with the .pkr.hcl extension solved the problem.
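In other words, Packer 1.5 selects the HCL2 parser from the .pkr.hcl file extension rather than from a command-line switch, so (file names are illustrative):
mv hcl-example.hcl hcl-example.pkr.hcl
packer validate hcl-example.pkr.hcl
packer build hcl-example.pkr.hcl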

How to run a cypher script file from Terminal with the cypher-shell neo4j command?

I have a cypher script file and I would like to run it directly.
All the answers I could find on SO use the command neo4j-shell, which in my version (Neo4j server 3.5.5) seems to be deprecated and replaced by the command cypher-shell.
Using the command sudo ./neo4j-community-3.5.5/bin/cypher-shell --help I got the following instructions.
usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD]
[--encryption {true,false}]
[--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS]
[--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher]
A command line shell where you can execute Cypher against an
instance of Neo4j. By default the shell is interactive but you can
use it for scripting by passing cypher directly on the command
line or by piping a file with cypher statements (requires Powershell
on Windows).
My file is the following, which tries to create a graph from CSV files; it comes from the book "Graph Algorithms".
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.latitude),
place.population = toInteger(row.population)
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination)
When I try to pass the file directly with the command:
sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher
first it asks for a username and password, but after typing the correct password (a wrong password results in the error The client is unauthorized due to authentication failure.), I get the error:
Invalid input 'n': expected <init> (line 1, column 1 (offset: 0))
"neo_4.cypher"
^
When I try piping with the command:
sudo cat neo_4.cypher | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd'
no output is generated and no graph either.
How to run a cypher script file with the neo4j command cypher-shell?
Use cypher-shell -f yourscriptname. Check --help for more details.
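For example (a sketch; the credentials are placeholders, and -f must be supported by your cypher-shell version, as it is not listed in the 3.5.5 usage text below):
./neo4j-community-3.5.5/bin/cypher-shell -u neo4j -p 'pwd' -f neo_4.cypher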
I think the key is here:
cypher-shell --help
... Stuff deleted
positional arguments:
cypher an optional string of cypher to execute and then exit
This means that the parameter is actual Cypher code, not a file name. Thus, this works:
GMc@linux-ihon:~> cypher-shell "match(n) return n;"
username: neo4j
password: ****
+-----------------------------+
| n |
+-----------------------------+
| (:Job {jobName: "Job01"}) |
| (:Job {jobName: "Job02"}) |
But this doesn't, because the text "neo_4.cypher" isn't a valid Cypher query:
cypher-shell neo_4.cypher
The help also says:
example of piping a file:
cat some-cypher.txt | cypher-shell
So:
cat neo_4.cypher | cypher-shell
should work. Possibly your problem is all of the sudos, specifically the cat ... | sudo cypher-shell. It is possible that sudo is protecting cypher-shell from some arbitrary input (although it doesn't seem to do so on my system).
If you really need to use sudo to run cypher, try using the following:
sudo cypher-shell arguments_as_needed < neo_4.cypher
Oh, also, your script doesn't have a return, so it probably won't display any data, but you should still see the summary reports of records loaded.
Perhaps try something simpler first such as a simple match ... return ... query in your script.
Oh, and don't forget to terminate the cypher query with a semi-colon!
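For instance, a minimal end-to-end check (username and password are placeholders):
echo "MATCH (n) RETURN count(n);" | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u neo4j -p 'pwd'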
The problem was in the cypher file: each statement should end with a semicolon (;). I still need sudo to run the program.
Actually, the file taken from the book seems to contain other errors as well (for example, place.longitude is set from row.latitude).

Failed loading positionFile: error while using the TAILDIR source in Flume

I am working on Flume to append data from a local directory to HDFS using the Flume TAILDIR source.
My use case is a delta load: if a new line arrives in the source file in the local directory, it should be appended in HDFS.
This is my Flume Conf file :
#configure the agent
agent.sources=r1
agent.channels=k1
agent.sinks=c1
agent.sources.r1.type=TAILDIR
agent.sources.r1.positionFile = /home/flume/Documents/taildir_position.json
agent.sources.r1.filegroups=f1
agent.sources.r1.filegroups.f1=/home/flume/Documents/spooldir/
agent.sources.r1.batchSize = 20
agent.sources.r1.writePosInterval=2000
agent.sources.r1.maxBackoffSleep=5000
agent.sources.r1.fileHeader = true
agent.sources.r1.channels=k1
agent.channels.k1.type=memory
agent.channels.k1.capacity=10000
agent.channels.k1.transactionCapacity=1000
agent.sinks.c1.type=hdfs
agent.sinks.c1.channel=k1
agent.sinks.c1.hdfs.path=hdfs://localhost:8020/flume_sink
agent.sinks.c1.hdfs.batchSize = 1000
agent.sinks.c1.hdfs.rollSize = 268435456
agent.sinks.c1.hdfs.writeFormat=Text
While running the Flume command flume-ng agent -n agent -c conf -f /home/swechchha/Documents/flumereal.conf,
I am getting an error loading the JSON position file.
Here is the code. It crashes at line 110. Please make sure that the flume user has access to that JSON file and that the file is correctly formatted.
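For reference, a well-formed TAILDIR position file is a JSON array of entries like the following (the values here are illustrative):
[{"inode": 2496272, "pos": 166, "file": "/home/flume/Documents/spooldir/file1.txt"}]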
The flume.conf in the question has a problem.
TAILDIR SOURCE: Watch the specified files, and tail them in nearly real-time once new lines are detected appended to each file. If new lines are being written, this source will retry reading them while waiting for the completion of the write.
When the filegroups property covers a directory that may contain multiple files, it should be written as a directory path plus a filename regex, like this:
agent.sources.r1.filegroups.f1=/home/flume/Documents/spooldir/.*txt.*
Then run Flume with this configuration and check the result; it should work fine.

How do I set user-level flags for the Grid Engine qsub command?

I am running Grid Engine (GE 6.2u5) jobs from the command line. For example,
qsub echo "Hello"
But I get this error,
Unable to read script file because of error: error opening echo: No such file or directory
The workaround is easy: use the -b y flag. I'd like to create an SGE properties file in my home directory which will set '-b y' as the default. How do I do this?
If you want to add your option, you can edit the file sge_request. It allows you to set default options that will be added to every request you submit.
This file is located at: $SGE_ROOT/$CELL_NAME/common/sge_request
For more information, check the documentation: http://gridscheduler.sourceforge.net/htmlman/htmlman5/sge_request.html
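Per that man page, defaults can also be set per user in a .sge_request file in your home directory, which matches what the question asks for. A minimal sketch:
# ~/.sge_request -- default options added to every qsub submission
-b y
After that, qsub echo "Hello" should work without passing -b y explicitly.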