In https://packer.io/guides/hcl/from-json-v1/, it says
Note: Starting from version 1.5.0 Packer can read HCL2 files.
And my Packer is packer_1.5.5_linux_amd64.zip, which is supposed to be able to read HCL2 files. However, when I tried it, I got
$ packer build -only=docker hcl-example
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
==> Builds finished but no artifacts were created.
$ packer build -h
Usage: packer build [options] TEMPLATE
Will execute multiple builds in parallel as defined in the template.
The various artifacts created by the template will be outputted.
Options:
-color=false Disable color output. (Default: color)
-debug Debug mode enabled for builds.
-except=foo,bar,baz Run all builds and post-procesors other than these.
-only=foo,bar,baz Build only the specified builds.
-force Force a build to continue if artifacts exist, deletes existing artifacts.
-machine-readable Produce machine-readable output.
-on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask.
-parallel=false Disable parallelization. (Default: true)
-parallel-builds=1 Number of builds to run in parallel. 0 means no limit (Default: 0)
-timestamp-ui Enable prefixing of each ui output with an RFC3339 timestamp.
-var 'key=value' Variable for templates, can be used multiple times.
-var-file=path JSON file containing user variables. [ Note that even in HCL mode this expects file to contain JSON, a fix is comming soon ]
and I don't see any switch in the output above to enable HCL2 mode.
What am I missing here?
$ packer version
Packer v1.5.5
$ cat hcl-example
# the source block is what was defined in the builders section and represents a
# reusable way to start a machine. You build your images from that source.
source "amazon-ebs" "example" {
ami_name = "packer-test"
region = "us-east-1"
instance_type = "t2.micro"
}
[UPDATE:]
To address Matt's comment/concern, I've changed the content of hcl-example to the full example in https://packer.io/guides/hcl/from-json-v1/, and
mv hcl-example hcl-example.hcl
$ packer validate hcl-example.hcl
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
Naming it with a .pkr.hcl extension solved the problem.
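For reference, the working sequence looked roughly like this (file name carried over from above):
$ mv hcl-example hcl-example.pkr.hcl
$ packer validate hcl-example.pkr.hcl
$ packer build hcl-example.pkr.hcl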
I am trying to download the OpenJDK 8 source code from the Mercurial repository using
hg clone http://hg.openjdk.java.net/jdk8/jdk8 openJDK8
I am getting the below error:
abort: error: node name or service name not known
If we add the IP address and hostname to the /etc/hosts file, will it get resolved?
But I don't know how to find the IP address and hostname of http://hg.openjdk.java.net.
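Something like the following is what I have in mind for finding the address, though I'm not sure it's right (any resolved address would then go into /etc/hosts):
$ nslookup hg.openjdk.java.net
$ getent hosts hg.openjdk.java.net
# then append the result to /etc/hosts, e.g. (address below is purely illustrative):
# 203.0.113.10   hg.openjdk.java.net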
From another S10 system, I was able to download the source. I checked /etc/hosts and /etc/resolv.conf; both are the same. When I copy the downloaded source to my system and try to build it, I get a timestamp error in the hotspot folder:
WARNING: You are using cc version 5.13 and should be using version 5.10.
Set ENFORCE_CC_COMPILER_REV=5.13 to avoid this warning.
/opt/csw/bin//gmake: invalid option -- /
/opt/csw/bin//gmake: invalid option -- c
/opt/csw/bin//gmake: invalid option -- c
/opt/csw/bin//gmake: invalid option -- 8
/opt/csw/bin//gmake: invalid option -- /
/opt/csw/bin//gmake: invalid option -- a
/opt/csw/bin//gmake: invalid option -- /
/opt/csw/bin//gmake: invalid option -- c
Usage: gmake [options] [target] ...
Options:
-b, -m Ignored for compatibility.
-B, --always-make Unconditionally make all targets.
-C DIRECTORY, --directory=DIRECTORY
Change to DIRECTORY before doing anything.
-d Print lots of debugging information.
--debug[=FLAGS] Print various types of debugging information.
-e, --environment-overrides
Environment variables override makefiles.
-E STRING, --eval=STRING Evaluate STRING as a makefile statement.
-f FILE, --file=FILE, --makefile=FILE
Read FILE as a makefile.
-h, --help Print this message and exit.
-i, --ignore-errors Ignore errors from recipes.
-I DIRECTORY, --include-dir=DIRECTORY
Search DIRECTORY for included makefiles.
-j [N], --jobs[=N] Allow N jobs at once; infinite jobs with no arg.
-k, --keep-going Keep going when some targets can't be made.
-l [N], --load-average[=N], --max-load[=N]
Don't start multiple jobs unless load is below N.
-L, --check-symlink-times Use the latest mtime between symlinks and target.
-n, --just-print, --dry-run, --recon
Don't actually run any recipe; just print them.
-o FILE, --old-file=FILE, --assume-old=FILE
Consider FILE to be very old and don't remake it.
-O[TYPE], --output-sync[=TYPE]
Synchronize output of parallel jobs by TYPE.
-p, --print-data-base Print make's internal database.
-q, --question Run no recipe; exit status says if up to date.
-r, --no-builtin-rules Disable the built-in implicit rules.
-R, --no-builtin-variables Disable the built-in variable settings.
-s, --silent, --quiet Don't echo recipes.
--no-silent Echo recipes (disable --silent mode).
-S, --no-keep-going, --stop
Turns off -k.
-t, --touch Touch targets instead of remaking them.
--trace Print tracing information.
-v, --version Print the version number of make and exit.
-w, --print-directory Print the current directory.
--no-print-directory Turn off -w, even if it was turned on implicitly.
-W FILE, --what-if=FILE, --new-file=FILE, --assume-new=FILE
Consider FILE to be infinitely new.
--warn-undefined-variables Warn when an undefined variable is referenced.
This program built for i386-pc-solaris2.10
Report bugs to <bug-make@gnu.org>
gmake[5]: *** [/export/home/preethi/buildopenjdk/check8/hotspot/make/solaris/makefiles/top.make:84: ad_stuff] Error 2
gmake[4]: *** [/export/home/preethi/buildopenjdk/check8/hotspot/make/solaris/Makefile:225: product] Error 2
gmake[3]: *** [Makefile:217: generic_build2] Error 2
gmake[2]: *** [Makefile:167: product] Error 2
gmake[1]: *** [HotspotWrapper.gmk:45: /export/home/preethi/buildopenjdk/check8/build/solaris-x86-normal-server-release/hotspot/_hotspot.timestamp] Error 2
gmake: *** [/export/home/preethi/buildopenjdk/check8//make/Main.gmk:109: hotspot-only] Error 2
Following steps from:
https://hg.openjdk.java.net/jdk8u/jdk8u/raw-file/tip/README-builds.html
System spec:
SunOS pkg.oracle.com 5.10 Generic_150401-16 i86pc i386 i86pc
1) If we add the IP address and host to /etc/hosts, will the problem get resolved?
2) Why does the source copied from the other S10 system not build on mine?
I added 137.254.56.60 openjdk.java.net to /etc/hosts, but I get the same error. From my system I am not able to ping openjdk.java.net; there is no answer from 137.254.56.60. I am new to Solaris and not very familiar with proxy settings. Can anyone please help?
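If a proxy is what's blocking me, I assume something like this is what I would need (proxy host and port below are placeholders):
$ hg --config http_proxy.host=proxy.example.com:8080 clone http://hg.openjdk.java.net/jdk8/jdk8 openJDK8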
I am getting the following error message:
Warning: Environment variable SUMO_HOME is not set, using built in type maps.
Warning: Environment variable SUMO_HOME is not set, schema resolution will use slow website lookups.
Error: unable to open file 'https://sumo.dlr.de/xsd/types_file.xsd'
In file 'built in type map'
At line/column 1/0.
The types could not be loaded from 'built in type map'.
Quitting (on error).
What could be causing this?
Error: unable to open file 'https://sumo.dlr.de/xsd/types_file.xsd'
It's http, not https. Please see this site: https://sumo.dlr.de/wiki/Networks/PlainXML → $ wget http://sumo.dlr.de/xsd/types_file.xsd
My test (I created a test dir sumo/TEST_COMMANDS/ with some default files plus the wget-downloaded types_file.xsd):
$ cd sumo/ && export SUMO_HOME="$PWD" && cd TEST_COMMANDS/
$ netconvert --node-files=input_nodes.nod.xml --edge-files=input_edges.edg.xml \
--connection-files=input_connections.con.xml --type-files=types_file.xsd \
--output-file=MySUMONet.net.xml
The terminal reply is: Success. And the file MySUMONet.net.xml (61.4 kB) is created.
I have a .gitlab-ci.yml file that I use to install a few plugins (craftcms/aws-s3, craftcms/redactor, etc.) in the publish stage. The file is provided below (in part):
# run the staging deploy, commands may be different based on the project
deploy-staging:
stage: publish
variables:
DOCKER_HOST: 127.0.0.1:2375
# ...............
# ...............
# TODO: temporary fix to the docker/composer issue
- docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/aws-s3
- docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/redactor
I have a JSON file, .butler.json, that holds the data for the plugins; it is provided below:
{
"customer_number": "007",
"project_number": "999",
"site_name": "Welance",
"local_url": "localhost",
"db_driver": "mysql",
"composer_require": [
"craftcms/redactor",
"craftcms/aws-s3",
"nystudio107/craft-typogrify:1.1.17"
],
"local_plugins": [
"welance/zeltinger",
"ansmann/ansport"
]
}
How do I take the plugin names from the "composer_require" and the "local_plugins" inside the .butler.json file and create a for loop in the .gitlab-ci.yml file to install the plugins?
You can't create a loop in .gitlab-ci.yml since YAML is not a programming language. It only describes data. You could use a tool like jq to query for your values (cat .butler.json | jq '.composer_require') inside a script, but you cannot set variables from there (there is a feature request for it).
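What you can do, though, is run the loop inside the job's script section. A minimal sketch, assuming jq is available in the runner image and reusing the docker-compose invocation from the question:
# install every package listed under composer_require in .butler.json
for pkg in $(jq -r '.composer_require[]' .butler.json); do
  docker-compose -p "ci-$CI_PROJECT_ID" --project-directory "$CI_PROJECT_DIR" \
    -f build/docker-compose.staging.yml exec -T craft \
    composer --working-dir=/data/craft require "$pkg"
done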
You could use a templating engine like Jinja (which is often used with YAML, e.g. by Ansible and SaltStack) to generate your .gitlab-ci.yml from a template. There exists a command line tool j2cli which takes variables as JSON input, you could use it like this:
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
You could then use Jinja expression to loop over your data and create corresponding YAML in gitlab-ci.yml.j2:
{% for item in composer_require %}
# build your YAML, e.g. one "composer require" line per plugin:
- docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require {{ item }}
{% endfor %}
The drawback is that you need the processed .gitlab-ci.yml checked in to your repository. This can be done via a pre-commit hook (before each commit, regenerate the .gitlab-ci.yml file and, if it changed, commit it along with your other changes).
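A minimal sketch of such a hook, assuming j2cli is installed (it would live in .git/hooks/pre-commit and is purely illustrative):
#!/bin/sh
# regenerate the pipeline file from the template and stage it if it changed
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
git add .gitlab-ci.yml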
I'm running a test with JMeter 2.13 on Ubuntu 14.04, getting the output as CSV. I use the following command line to try to get JMeter to read the properties file and add fields to the CSV output:
./jmeter -n -p /opt/apache-jmeter-2.13/bin/jmeter.properties -l n1.csv -t Apache-DB.jmx
With the following in the properties file
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.connect_time=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.default_delimiter=,
It doesn't seem to pick it up, as no field headers are printed. Here's the first line of the CSV file:
1448233211742,313,HTTP Request,200,OK,Thread Group 1-1,text,false,209666,1,1,96
I've also tried --propfile instead of -p, which didn't work. Am I doing something wrong or does JMeter not read those configuration options like it should?
Background information / helpful information for others
I have managed to turn on a couple of extra fields using command line switches (just in case anyone finds this on Google). This puts field labels on the JMeter CSV output:
./jmeter -n -Jjmeter.save.saveservice.print_field_names=true -Jjmeter.save.saveservice.connect_time=true -l n1.csv -t Apache-DB.jmx
For reference, here are the default JMeter CSV fields:
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,grpThreads,allThreads,Latency
The header at the top of jmeter.properties advises:
################################################################################
#
# THIS FILE SHOULD NOT BE MODIFIED
#
# This avoids having to re-apply the modifications when upgrading JMeter
# Instead only user.properties should be modified:
# 1/ copy the property you want to modify to user.properties from jmeter.properties
# 2/ Change its value there
#
################################################################################
Your settings are likely being overridden when the default saveservice properties are loaded after jmeter.properties.
Try putting your properties in user.properties.
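For example, a sketch along these lines, reusing the JMeter home from the question (only a few of the properties are shown):
# append the overrides to user.properties instead of editing jmeter.properties
cat >> /opt/apache-jmeter-2.13/bin/user.properties <<'EOF'
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.connect_time=true
EOF
./jmeter -n -l n1.csv -t Apache-DB.jmx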
I am getting this error with GT.M:
%GTM-E-GDINVALID, Unrecognized Global Directory file format: /home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006
Here is what I did so far:
get the version from http://sourceforge.net/projects/fis-gtm/
tar -xzf gtm_V55000_linux_i686_pro.tar.gz
chmod +x semstat2 mupip mumps lke gtmsecshr gtcm_shmclean gtcm_server gtcm_play gtcm_pkdisp gtcm_gnp_server geteuid ftok dse
Now we start like this in Bash:
mkdir example; cd example
...and invoke mumps from the parent dir:
../mumps -r GDE
The output is this:
%GDE-I-GDUSEDEFS, Using defaults for Global Directory
/home/blah/gt.m/example/mumps.gld
Now we set the working dir to create the gld file.
GDE> change -s DEFAULT -f=/home/blah/gt.m/gt.m/example/
GDE> exit
The output from the command is this :
>%GDE-I-VERIFY, Verification OK
>%GDE-I-GDCREATE, Creating Global Directory file
> /home/blah/gt.m/example/mumps.gld
Now this creates a v6 version of gld, which mupip does not like:
strings mumps.gld | head -1
Which contains this string:
GTCGBDUNX006H
But mupip expects a 7, not a 6!
../mupip create
>%GTM-E-GDINVALID, Unrecognized Global Directory file format: >/home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006
If I just edit the file and replace the 6 with a 7,
../mupip create
This works!
Now I have a .dat file, and I go into GTM to save something:
GTM>s ^foo("blah")=1
%GTM-E-GDINVALID, Unrecognized Global Directory file format: >/home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX006, found: GTCGBDUNX007
Oh, so that wants a v6. Good thing I backed up the old one; I replace it.
GTM>s ^foo("blah")=1
That works.
GTM>zwr ^foo(*)
>^foo("blah")=1
So the data is stored.
Can anyone please explain this? In detail? Why does mupip operate with a different version number?
Note: I did not run any other commands. I am just learning and don't want to execute, as root, any huge install routines that I don't understand.
In your steps you don't show whether you installed GT.M or not.
That is only the unzipped version; first:
chmod 777 configure
./configure
The installation will produce new files in the gtm_dist directory.
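After that, the environment typically points at the installed copy; a sketch, where the install path is just an assumption (./configure asks where to install):
$ export gtm_dist=/opt/fis-gtm/V5.5-000_i686   # wherever ./configure installed it
$ export gtmroutines="$gtm_dist" gtmgbldir=/home/blah/gt.m/example/mumps.gld
$ $gtm_dist/mumps -r GDE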
You either have GT.M already installed (and I would guess it is an older version) somewhere else on your system and have some environment variables defined for it in your bash/tcsh/*sh environment, or you didn't provide all the steps you took to get to that error.
My guess is that you already have GT.M installed somewhere and your commands above use part of that installation. You can easily verify this using this command: env | grep gtm.
If I follow the steps you mentioned above, I get this result:
laurent@laurent /tmp/test $ tar -zxf ~/Projects/gtm_V55000_linux_i686_pro.tar.gz
laurent@laurent /tmp/test $ chmod +x semstat2 mupip mumps lke gtmsecshr gtcm_shmclean gtcm_server gtcm_play gtcm_pkdisp gtcm_gnp_server geteuid ftok dse
laurent@laurent /tmp/test $ mkdir example; cd example
laurent@laurent /tmp/test/example $ ../mumps -r GDE
%GTM-E-GTMDISTUNDEF, Environment variable $gtm_dist is not defined
So, as I said, you either did something else or have a different GT.M version already installed, and this is why some commands expect different versions of the GLD.
As Bhaskar noted in your cross-post on Hardhats, make sure you follow the installation instructions for GT.M. Instructions can be found in Chapter 2 of the UNIX Administration and Operations Guide.