How to run just one packer build - packer

Sorry if this is a silly question but I am new to packer and haven't been able to find the answer.
In my root directory I have 4 files.
be.pkr.hcl
data.pkr.hcl
fe.pkr.hcl
variables.pkr.hcl
I have been running this command so far, which builds both of my builds:
packer build .
My question is: how can I run just one of my builds? I know I could copy my variables and data into each build, but is there a better way to include them so that I can run, for example,
packer build fe.pkr.hcl

You would use the -only argument to packer build for this. You can specify the build names like -only=amazon-ebs.my_ami if you have not renamed the build in your template; otherwise use the name set in the build block.
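For example, assuming fe.pkr.hcl defines a source named amazon-ebs.fe (that name is hypothetical; check your own source blocks), a filtered build from the same directory could look like this:
packer build -only='amazon-ebs.fe' .
The -only flag also accepts comma-separated names and, in recent Packer versions, glob patterns such as -only='*.fe'.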

Programmatically create gitlab-ci.yml file?

Is there any tool to generate .gitlab-ci.yml file like Jenkins has job-dsl-plugin to create jobs?
Jenkins DSL plugin allows me to generate jobs using Groovy, which outputs an xml that describes a job for Jenkins.
I can use DSL and a json file to generate jobs in Jenkins. What I’m looking for is a tool to help me generate .gitlab-ci.yml based on a specification.
The main question I have to ask is: what is your goal?
Just reduce maintenance effort for repeated job snippets:
Sometimes the .gitlab-ci.yml files are pretty similar across a lot of projects, and you want to manage them centrally. Then I recommend taking a look at Having Gitlab Projects calling the same gitlab-ci.yml stored in a central location, which shows multiple ways of centralizing your build.
Generate the pipeline configuration because the build is highly flexible:
This is really a templating task, and it can be achieved in nearly every scripting language you like: plain bash, Groovy, Python, Go, .. you name it. In the end the question is what kind of flexibility you strive for and what kind of logic you need for the generation. I will not go into detail on how to generate the .gitlab-ci.yml file, but rather on how to use it in your next step, because in my opinion this is the most crucial part. You can simply generate and commit it, but you can also use GitLab CI to generate the file for you and use it in the next job of your pipeline:
setup:
  script:
    - echo ".." # generate your yaml file here, maybe use a custom image
  artifacts:
    paths:
      - generated.gitlab-ci.yml
trigger:
  needs:
    - setup
  trigger:
    include:
      - artifact: generated.gitlab-ci.yml
        job: setup
    strategy: depend
This allows you to generate a child pipeline and execute it - we use this for highly generic builds in monorepos.
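As a rough illustration of the generation step inside the setup job (the child job name and contents here are made up), it can be as simple as a shell heredoc, or something more elaborate like envsubst, Jinja or jsonnet:
# write the child pipeline definition that the trigger job will include
cat > generated.gitlab-ci.yml <<'EOF'
build-service-a:
  stage: build
  script:
    - echo "building service a"
EOF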
See the following for further reading:
GitLab JSONNET Example - documentation example for generated yml files within a pipeline
Dynamic Childpipelines - documentation for dynamically created pipelines

How to allow web-component-tester to run tests stored with my components

I am experimenting with the framework to build an SPA using polymer. This will include a large number of custom elements at various levels in the overall application hierarchy. I would like to use web-component-tester to run the module tests on them.
web-component-tester seems to be opinionated about where the tests will be stored - in a separate test directory, where it will run all files found.
I am of an opposite opinion. I would like to store tests in the same directory as the element definition. I would like to differentiate tests by naming them xxx.test.html (or possibly xxx.test.js). I also want to run different "sets" of tests controlled by gulp some of which will be watching for changes and then running the tests (for the app side of my project) and some of which will be elements that use core-ajax to unit test my server side scripts. These will more than likely be in a totally different directory hierarchy (my dist directory) and will be served by a proper web server.
I "think" the "suite" config option wct-conf.js file in my project root might be how I can define this, or alternatively a wct command with some file globs. Unfortunately web-component-tester's README is somewhat confusing on any detail and when you have your own web server it says "You'll need to save WCT's browser.js in order to go this route." What does that mean?
Can someone enlighten me on how I can get WCT to run each of the elements/**/*.test.html files as its own "suite" (I actually intend to use the describe/it format, but I assume I still need to use the term suite)?
Can someone also explain what I need to do with browser.js when I have my own web server?
I ran some experiments and did a bit of debugging with node-inspector. Firstly, the command line overrides the suites parameter in the config file:
wct app/elements/**/*.test.html
This does find all my module tests if I have them stored with the elements, and ignores the contents of the wct.conf.js file's suites parameter.
Putting the same value (i.e. app/elements/**/*.test.html) in the wct.conf.js file's suites parameter does the same job; in fact, in this mode, gulp test:local also works correctly.
So to run different tests for module and distribution, I just need to set up wct.conf.js for my module tests, and set up gulp to run a command line with the correct location of my test files.
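For reference, a minimal wct.conf.js along these lines might look like the following (the glob is the one from my setup; WCT loads this file from the project root):
module.exports = {
  // run every co-located test page as a suite
  suites: ['app/elements/**/*.test.html']
};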
I still haven't understood the instructions for running with your own web server.

Tcl: Regarding the source, package, and namespace commands

I want to know about modular programming in Tcl and how we can achieve it.
Some Tcl tutorials mention that the source command has drawbacks for achieving "modularity", which is why we moved to package; then package has some further drawbacks, which is why we ended up with the combination of package and namespace.
I want to know what those drawbacks are and the proper hierarchy of the three concepts. Can anyone help me?
I'm not sure if I understand your question correctly, so I'll try to explain the three commands you mentioned in your question:
source: Evaluates a file as a Tcl script.
It simply opens the file, reads until the EOF character (^Z on both Windows and *nix) and evaluates it.
It does not keep track of sourced files, so you can source the same file again (great for hotpatching), but this is the drawback: It will source the file again.
package: Manages packages. It basically keeps track of the provided packages and tries to figure out which file it has to source to load a new package.
namespace: They provide context for commands and variables, so you don't have to worry about unique names for your commands. Just the namespace has to be unique. Has nothing to do with loading packages or other modules, it just provides namespaces.
I suggest that you use packages, each package in its own file, each package with a namespace equal to the package name where all its commands reside.
You should export the public commands with namespace export.
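A minimal sketch of that layout might look like this (the package name and command are made up):
# greeter.tcl -- one package per file, commands inside a matching namespace
package provide greeter 1.0

namespace eval ::greeter {
    namespace export hello

    proc hello {who} {
        return "Hello, $who!"
    }
}
With a pkgIndex.tcl in place, a consumer would then run package require greeter and call ::greeter::hello, or pull the exported command in with namespace import ::greeter::hello.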

How can I see the full command actually run by a build system in SublimeText2?

I'm writing my own build system in SublimeText2 but it's not working properly. It would be useful to see the full command that is actually being run to be able to see what's wrong. Is it possible to do that?
When first writing new build rules, I often prefix the rest of the build rule with "echo". That will not run your command, but it will print it in the build output so you can see exactly what would have been executed.
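For example, a hypothetical .sublime-build file using that trick might look like this (the gcc command is just a placeholder for whatever your build system runs):
{
    "cmd": ["echo", "gcc", "$file", "-o", "$file_base_name"],
    "shell": true
}
Once the printed command looks right, drop the leading "echo" to actually run it.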

How can I easily save the HTML pages of my Rails app to give to a designer?

I am working on a Rails app, and a designer is designing the raw HTML pages separately. It's tough to get his environment set up to use the application directly, so I would like to be able to somehow "store" the HTML of all of the pages that my application generates to a directory somewhere, so that I can pass the current version off to the designer.
Does anyone know of a gem or rake task that would help me do something like this?
I am also open to other suggestions for working in parallel with designers who don't know rails.
Thanks
Edit
I guess an amendment to my question would be: does anyone also know of ways of generating the list of page links to feed to wget, other than going through them by hand?
Edit 2
Just thinking out loud... to generate every possible page in an app, you'd need to call every action in every controller. So I'd need a program to find which controllers exist in all of my app/gems/plugins, and then find all of the public methods in them. Or maybe I could just use the actions that are routable, taken from the list of routes.
Then you might want to filter out the actions that don't render HTML.
Then you might want to filter out destructive actions (unless this program ran in a test environment and rebuilt the system every time).
Then, as many actions depend on the parameters that are supplied, you'd need to have control over which parameters are sent to each action...
Then you'd also have to be able to send session cookies to log in.
What else..
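A rough sketch of the "use the routes" idea might be (the host is a placeholder, the column positions depend on the Rails version, and dynamic segments such as :id would still need real values):
rake routes | awk '$2 == "GET" { sub(/\(\.:format\)/, "", $3); print "http://localhost:3000" $3 }' > urls.txt
wget -E -p -k -i urls.txt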
wget -m http://somewhere.com
This command will fetch all the files / pages from http://somewhere.com and download them to a local directory, to form a local "mirror."
-m
--mirror
    Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
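For a Rails app running locally behind a login, a variant along these lines may be closer to what you need (host, port and cookie file are placeholders):
wget -m -p -E -k --load-cookies cookies.txt http://localhost:3000/
Here -p pulls in page requisites such as CSS and images, -E saves pages with an .html extension, -k rewrites links so the mirror can be browsed locally, and --load-cookies reuses a logged-in session exported from your browser.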
Note: I don't believe Mac OS X ships with wget. If you are using a Mac, I'd suggest installing Homebrew and then running brew install wget.
Read more: man wget