Jenkins key-value variable or using a variable to define another - hudson

I'm setting up a centralized build system where several projects get "packaged". The project can be selected using a choice parameter, and based on it I build the SVN checkout path. The repository is the same, but each app has a slightly different path, e.g.:
app 1 resides in ../libs/App1..
app 2 resides in ../tools/App2..
etc.
My problem is that I need to somehow match the app name with its location.
The first thought that came to mind was a key-value parameter, but I have yet to find a plugin that permits it.
The second was to define some environment variables within a file (there are 2-3 plugins that do this) and then use the value selected in the choice parameter as the key for the env var. Is this achievable in any way?
Kind regards

You can use the Conditional BuildStep Plugin together with the EnvInject Plugin to set up an environment variable (say APP_PATH) that depends on your build parameters, before any other build steps run. You then use ${APP_PATH} wherever you need it in your build.
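For illustration, the job configuration could look roughly like this (the PROJECT parameter name and the paths are placeholders based on the question):

Conditional step: run if ${PROJECT} equals "App1"
    Inject environment variables, Properties Content: APP_PATH=libs/App1
Conditional step: run if ${PROJECT} equals "App2"
    Inject environment variables, Properties Content: APP_PATH=tools/App2

Each conditional step wraps an "Inject environment variables" build step, so only the APP_PATH matching the selected project is set.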

After a few years :-) and on an unrelated topic, I found the Active Choices plugin (also known as, and forked from, the Uno Choice plugin), which lets you define parameters that dynamically update when other parameters change:
1) Define an Active Choices parameter named States with the following Groovy script content:
return [
    'Sao Paulo',
    'Rio de Janeiro',
    'Parana',
    'Acre'
]
2) Define an Active Choices Reactive parameter named cities, which reacts to changes in the States parameter, with the following Groovy script content:
if (States.equals("Sao Paulo")) {
    return ["Barretos", "Sao Paulo", "Itu"]
} else if (States.equals("Rio de Janeiro")) {
    return ["Rio de Janeiro", "Mangaratiba"]
} else if (States.equals("Parana")) {
    return ["Curitiba", "Ponta Grossa"]
} else if (States.equals("Acre")) {
    return ["Rio Branco", "Acrelandia"]
} else {
    return ["Unknown state"]
}
Thus, each time the state changes, the list of selectable cities will be updated accordingly.

Related

How to pass parameters or variables of an ARM template in a log query

While working on log queries in an ARM template, I am stuck on how to pass parameter or variable values into the log query.
"parameters": {
    "resProviderName": {
        "value": "Microsoft.Storage"
    }
}
For example:
AzureMetrics | where ResourceProvider == parameters('resProviderName') | where Resource == 'testacc'
Here I am facing an error: the query treats parameters('resProviderName') as a literal value instead of reading the value of the resProviderName parameter. My requirement is to take the values from parameters or variables and not hard-code them, as I did in the above query with Resource == 'testacc'.
Is there any option to read the values from either parameters or variables in the log query?
If so, please help with this issue.
The answer to this would depend on what this query segment is a part of, and how your template is structured.
It appears that you're trying to reference a resource in the query. It is best to use one of the many available template resource functions if you want the details of a resource, like its resource ID, within the resources section. When referencing a resource that is deployed in the same template, provide the name of the resource through a parameter. When referencing a resource that isn't deployed in the same template, fetch the resource ID.
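For illustration, a hedged sketch of that second case (the storageAccountName parameter is a placeholder of mine, not from the question), capturing a resource ID in a variable with the resourceId() template function:

"variables": {
    "storageResourceId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
}

The query can then reference variables('storageResourceId') instead of a hard-coded resource name.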
Also, I'd recommend looking at how queries can be constructed as custom variables, as opposed to the other way round. According to ARM template best practices, variables should be used for values that you need more than once in a template; if a value is used only once, a hard-coded value makes your template easier to read. For more info, please take a look at this doc.
Hope this helps.
This is an old question but I bumped into this while searching for the same issue.
I'm much more familiar with Bicep templates, so to figure this out I created a Bicep template, constructed the query using variables, and compiled it. This generates an ARM template that you can analyze.
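As a hedged illustration of that workflow (the variable names are mine, not from the question):

// Bicep sketch: string interpolation compiles down to ARM's format() function
var resProviderName = 'Microsoft.Storage'
var query = 'AzureMetrics | where ResourceProvider == \'${resProviderName}\' | where Resource == \'testacc\''

Running bicep build on this shows the generated ARM JSON, including the quote escaping.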
I figured out that you can use either the concat or the format function to construct your query using variables.
I preferred format since it looks more elegant and readable (also, bicep build generates the string using the format function).
So based on this example, the query would be something like this:
query: "[format('AzureMetrics | where ResourceProvider == {0} | where Resource == ''testacc'' ', parameters('resProviderName') )]"
Don't forget to escape the ' in the ARM template which you do by doubling the single quote.
I hope this helps some people.
André

textX: How to generate object names with ObjectProcessors?

I have a simple example model where I would like to generate names for those objects of the Position rule that were not given a name with 'as <NAME>'. This is needed so that I can find them later with the built-in FQN scope provider.
My idea was to do this in the position_name_generator object processor, but that will only be called after the whole model is parsed. I don't really understand the reason for that, since by the time I would need a Position object in the Project, the objects are already created, yet the object processor still has not been called.
Another idea would be to do this in a custom scope provider for Position.location, which would first do the name generation and then use the built-in FQN to find the Location object. Although this would work, I consider it hacky and would prefer to avoid it.
What would be the textX way of solving this issue?
(Please take into account that this is only a small example. In reality, a similar functionality is required for a rather big and complex model. Changing this behaviour with the generated names is not possible, since it is a requirement.)
import textx
import textx.scoping.providers
MyLanguage = """
Model
: (locations+=Location)*
(employees+=Employee)*
(positions+=Position)*
(projects+=Project)*
;
Project
: 'project' name=ID
('{'
('use' use=[Position])*
'}')?
;
Position
: 'define' 'position' employee=[Employee|FQN] '->' location=[Location|FQN] ('as' name=ID)?
;
Employee
: 'employee' name=ID
;
Location
: 'location' name=ID
( '{'
(sub_location+=Location)+
'}')?
;
FQN
: ID('.' ID)*
;
Comment:
/\/\/.*$/
;
"""
MyCode = """
location Building
{
location Entrance
location Exit
}
employee Hans
employee Juergen
// Shall be referred to with the given name: "EntranceGuy"
define position Hans->Building.Entrance as EntranceGuy
// Shall be referred to with the autogenerated name: <Employee>"At"<LastLocation>
define position Juergen->Building.Exit
project SecurityProject
{
use EntranceGuy
use JuergenAtExit
}
"""
def position_name_generator(obj):
    if "" == obj.name:
        obj.name = obj.employee.name + "At" + obj.location.name

def main():
    meta_model = textx.metamodel_from_str(MyLanguage)
    meta_model.register_scope_providers({
        "Position.location": textx.scoping.providers.FQN(),
    })
    meta_model.register_obj_processors({
        "Position": position_name_generator,
    })
    model = meta_model.model_from_str(MyCode)
    assert model, "Could not create model..."

if "__main__" == __name__:
    main()
What is the textx way to solve this...
The use case you describe is to define the name of an object based on other model elements, including references to other model elements. This is currently not covered by any test or use case in our test suite or the textX documentation.
Object processors are executed at defined stages during model construction (see http://textx.github.io/textX/stable/scoping/#using-the-scope-provider-to-modify-a-model). In the described setup they are executed after reference resolution. Since the name to be deduced is itself required for reference resolution, object processors cannot be used here (even if we allowed control over when object processors are executed, before or after scope resolution, the described setup still would not work).
Given the dynamics of model loading (see the link above), the solution is located within a scope provider (as you suggested). Here, we allow control over the order of reference resolution, such that references to the object being named by a custom procedure are postponed until the references required to deduce/define the name are resolved.
Possible workaround
A preliminary sketch of how your use case can be solved is discussed in https://github.com/textX/textX/pull/194 (with an attached issue, https://github.com/textX/textX/issues/193). This PR contains a version of scoping.py you could probably use for your project (just copy and rename the module). A full-fledged solution could be part of textX TEP-001, where we plan to make scoping more controllable for the end user.
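To make the direction concrete, here is a rough, hypothetical sketch of such a provider (my illustration, not the code from the PR; FQN and Postponed are real textX classes, while naming_fqn_provider and its internals are assumptions):

from textx.scoping import Postponed
from textx.scoping.providers import FQN

def naming_fqn_provider(obj, attr, obj_ref):
    # Resolve Position.location with the built-in FQN provider.
    location = FQN()(obj, attr, obj_ref)
    if location is None:
        return Postponed()  # cannot resolve yet; retry in a later pass
    # Derive the missing name once the location is known. This assumes
    # obj.employee is already resolved at this point; a robust solution
    # (see the referenced PR) must postpone until that is guaranteed.
    if not obj.name:
        obj.name = obj.employee.name + "At" + location.name
    return location

You would register this via meta_model.register_scope_providers({"Position.location": naming_fqn_provider}) in place of the plain FQN provider.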
Playing around with this absolutely interesting issue revealed new aspects of the textX framework to me: names that depend on model contents (involving unresolved references). This name resolution can itself be postponed (Postponed, in the referenced PR; see below) in terms of our reference-resolution logic.
Even more interesting are the consequences of that: what happens to references pointing to locations where unresolved names are found? Here, we must postpone the reference-resolution process, because we cannot know whether the name might match once it is resolved.
Your example is included: https://github.com/textX/textX/blob/analysis/issue193/tests/functional/test_scoping/test_name_resolver/test_issue193_auto_name.py

How to check whether a file is serial or partitioned using Ab Initio functions only

I need to resolve a parameter value depending upon whether I have a serial file or a multifile. Below is the scenario:
I have created a generic graph with a reformat component just after the input file component. At run time, I need to check whether the input file is serial or multi, and accordingly populate the layout of the reformat.
Hence, to achieve this, I am looking for a specific Ab Initio function.
Thanks
I think there is a function for this: m_fs_check.
You can use this function in the graph parameters and use the resolved value as a condition to determine the layout.
m_fs_check will check whether the directory is a serial or a multi directory. However, a user can still create a serial file in a multi directory. One option is to fire an m_ls -lt command: the result displays an 'M' flag, which denotes that a file is a multifile; for serial files this flag remains blank.
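For illustration only, a shell sketch of that check (the exact m_ls -lt output layout is an assumption here and may vary between versions):

# Hedged sketch: assumes the multifile flag shows up as a standalone 'M'
# column in the m_ls -lt listing, as described above.
if m_ls -lt "$INPUT_FILE" | grep -q ' M '; then
    echo "multifile layout"
else
    echo "serial layout"
fi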
Use m_expand($INPUT_FILE_PATH) in PDL at the PSET level to identify the directory depth: if the depth is greater than one, it is a multifile; otherwise it is serial. Then use the resulting flag in your reformat's layout.

Is there an easy way for cfengine3 to copy different files based on the OS it's running?

I have two different versions of Linux/Unix, each running cfengine3. Is it possible to have one promises.cf file I can put on both machines that will copy different files based on which OS is on the client? I have been searching around the internet for a few hours now and have not found anything useful yet.
There are several ways of doing this. At the simplest, you can have different files: promises depending on the operating system, for example:
files:
  ubuntu_10::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.ubuntu_10");
  suse_9::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.suse_9");
  redhat_5::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.redhat_5");
  windows_7::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.windows_7");
This example can be easily simplified by realizing that the built-in CFEngine variable $(sys.flavor) contains the type and version of the operating system, so we could rewrite this example as follows:
"/etc/hosts"
copy_from => mycopy("$(repository)/etc.$(sys.flavor)");
A more flexible way to achieve this task is known in CFEngine terminology as "hierarchical copy." In this pattern, you specify an arbitrary list of variables by which you want files to be differentiated, and the order in which they should be considered, from most specific to most general. When the copy promise is executed, the most-specific file found will be copied.
This pattern is very simple to implement:
# Use single copy for all files
body agent control
{
  files_single_copy => { ".*" };
}

bundle agent test
{
  vars:
    "suffixes" slist => { ".$(sys.fqhost)", ".$(sys.uqhost)", ".$(sys.domain)",
                          ".$(sys.flavor)", ".$(sys.ostype)", "" };
  files:
    "/etc/hosts"
      copy_from => local_dcp("$(repository)/etc/hosts$(suffixes)");
}
As you can see, we are defining a list variable called $(suffixes) that contains the criteria by which we want to differentiate the files. All the variables contained in this list are automatically defined by CFEngine, although you could use any arbitrary CFEngine variables. Then we simply include that variable, as a scalar, in our copy_from parameter. Because CFEngine does automatic list expansion, it will try each variable in turn, executing the copy promise multiple times (once for each value in the list) and copying the first file that exists. For example, for a Linux SuSE 11 machine called superman.justiceleague.com, the $(suffixes) variable will contain the following values:
{ ".superman.justiceleague.com", ".superman", ".justiceleague.com", ".suse_11",
".linux", "" }
When the file-copy promise is executed, implicit looping will cause these strings to be appended in sequence to "$(repository)/etc/hosts", so the following filenames will be attempted in sequence: hosts.superman.justiceleague.com, hosts.superman, hosts.justiceleague.com, hosts.suse_11, hosts.linux and hosts. The first one to exist will be copied over /etc/hosts on the client, and the rest will be skipped.
For this technique to work, we have to enable "single copy" on all the files you want to process. This is a configuration parameter that tells CFEngine to copy each file at most once, ignoring successive copy operations for the same destination file. The files_single_copy parameter in the agent control body specifies a list of regular expressions to match filenames to which single-copy should apply. By setting it to ".*" we match all filenames.
For hosts that don't match any of the existing files, the last item on the list (an empty string) will cause the generic hosts file to be copied. Note that the dot for each of the filenames is included in $(suffixes), except for the last element.
I hope this helps.
(p.s. and shameless plug: this is taken from my upcoming book, "Learning CFEngine 3", published by O'Reilly)

How to resolve naming conflicts when multiply instantiating a program in VxWorks

I need to run multiple instances of a C program in VxWorks (VxWorks has a global namespace). The problem is that the C program defines global variables (intended for use by a specific instance of that program) whose names conflict in the global namespace. I would like to make minimal changes to the program in order to make this work. All ideas welcomed!
Regards
By the way ... This isn't a good time to mention that global variables are not best practice!
The easiest thing to do would be to use task variables (see the taskVarLib documentation).
When using task variables, the variable is specific to the task now in context. On a context switch, the current variable is stored and the variable for the new task is loaded.
The caveat is that a task variable can only be a 32-bit number.
Each global variable must also be added independently (via its own call to taskVarAdd), and task variables add time to the context switch.
Also, you would NOT be able to share the global variable with other tasks.
You can't use task variables with ISRs.
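As a rough sketch of the mechanics (instanceInit and instanceId are hypothetical names, not from the question):

#include <vxWorks.h>
#include <taskVarLib.h>

int instanceId;   /* one shared global; each task registers its own copy */

STATUS instanceInit(int id)
{
    /* tid 0 means "the calling task". After this call, reads and writes
       of instanceId from this task use a task-private 32-bit value that
       is swapped in and out on every context switch. */
    if (taskVarAdd(0, &instanceId) == ERROR)
        return ERROR;

    instanceId = id;
    return OK;
}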
Another possibility:
If you are using VxWorks 6.x, you can make a Real Time Process (RTP) application.
This follows a process model (similar to Unix/Windows) where each instance of your program has its own global memory space, independent of any other instance.
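For example, a hedged sketch of spawning two independent instances with rtpSpawn() (the executable path, priority 100 and stack size 0x10000 are placeholder values):

#include <rtpLib.h>

/* Each RTP gets its own address space, so the two instances'
   globals can no longer collide. */
void startInstances(void)
{
    const char *argv1[] = { "/romfs/myApp.vxe", "instance1", NULL };
    const char *argv2[] = { "/romfs/myApp.vxe", "instance2", NULL };

    rtpSpawn(argv1[0], argv1, NULL, 100, 0x10000, 0, 0);
    rtpSpawn(argv2[0], argv2, NULL, 100, 0x10000, 0, 0);
}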
I had to solve this when integrating two third-party libraries from the same vendor. Both libraries used some of the same symbol names, but they were not compatible with each other. Because these were coming from a vendor, we couldn't afford to search & replace. And task variables were not applicable either since (a) the two libs might be called from the same task and (b) some of the dupe symbols were functions.
Assume we have app1 and app2, linked, respectively, to lib1 and lib2. Both libs define the same symbols so must be hidden from each other.
Fortunately (if you're using GNU tools) objcopy allows you to change the binding of symbols after linking, e.g. turning a global symbol into a local one.
Here's a sketch of the solution, you'll have to modify it for your needs.
First, perform a partial link for app1 to bind it to lib1. Here, I'm assuming that you've already partially linked *.o in app1 into app1_tmp1.o.
$(LD_PARTIAL) $(LDFLAGS) -Wl,-i -o app1_tmp2.o app1_tmp1.o $(APP1_LIBS)
Then, hide all of the symbols from lib1 in the tmp2 object you just created to generate the "real" object for app1.
objcopymips `nmmips $(APP1_LIBS) | grep ' [DRT] ' | sed -e's/^[0-9A-Fa-f]* [DRT] /-L /'` app1_tmp2.o app1.o
Repeat this for app2. Now you have app1.o and app2.o ready to link into your final application without any conflicts.
The drawback of this solution is that you don't have access to any of these symbols from the host shell. To get around this, you can temporarily turn off the symbol hiding for one or the other of the libraries for debugging.
Another possible solution would be to put your application's global variables in a static structure. For example:
From:
int global1;
int global2;

int someApp()
{
    global2 = global1 + 3;
    ...
}
To:
static struct appGlobStruct {
    int global1;
    int global2;
} appGlob;

int someApp()
{
    appGlob.global2 = appGlob.global1 + 3;
    ...
}
This simply turns into a search & replace in your application code. No change to the structure of the code.