Image synchronisation in a single Region - FIWARE

I would like to know something about the fiware-glancesync component. I want to synchronise only one image; that is, I want to synchronise a single image in a region without modifying the current configuration file. How can I define new configuration parameters (if that is possible) to do this with GlanceSync?


The algorithm used to select the images can be defined by the user. The easiest
and best way to synchronise only one image, or a set of images, is to modify the glancesync.conf configuration file inside the ./conf directory. I recommend creating a new section [test] so that you do not modify the current [master] section. Just write the following lines:
[test]
metadata_condition = image.name == 'GIS_GE'
credential= admin,<your secret>,http://130.206.112.3:5000/v2.0,admin
Keep in mind that '130.206.112.3' is the IP address of the Keystone service inside FIWARE Lab, and that the first and second 'admin' are the OS_USERNAME and the OS_TENANT_NAME. Last but not least, '<your secret>' is the password in base64 format.
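If you want a small set of images rather than a single one, the same mechanism should work, assuming metadata_condition is evaluated as a Python expression over the image metadata (as the example above suggests); the second image name below is just a placeholder:
metadata_condition = image.name in ('GIS_GE', 'some_other_image')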
Then just execute the command:
./sync.py test:<name of the node, e.g. Lannion2>
See the documentation in GlanceSync - Glance Synchronization Component to learn more details about the image synchronisation.
If you want to obtain more information about the configuration of GlanceSync, take a look at GlanceSync Configuration.


How can I set a variable in my workshop module via URL?

I currently have a workshop module that allows users to view a set of objects and then filter them according to a filter widget.
I would like my users to be able to set which filters are applied via URL such that they can quickly apply their default filters.
How can I achieve this?
1) Create a variable for each filter you would like the user to be able to apply via URL.
2) Promote the variable so that it can be updated via the URL.
3) Update the filter output to use the variables you created above.
In addition to using the promoted variables described in the answer above, specifically for the case of saving and re-using filter state, you can enable the State Saving feature for users of your app, which allows them to snapshot and save specific application state as a separate Foundry resource. You can then share the link, bookmark it, or re-open it from a folder without worrying about the "wiring" of variables and URL parameters.

Asterisk sip.conf in MySQL database

I'm able to include a phone inside a database for realtime usage, so this configuration (from /etc/asterisk/sip.conf):
[phone]
type=friend
username=phone
secret=12345
host=dynamic
disallow=all
allow=g729
allow=alaw
context=somecontext
nat=no
insecure=port,invite
is now inside a MySQL database.
Now, I want to include a SIP trunk using the register directive, but I don't know how to do that.
How can I include register => <username>:<password>@<provider> inside the database as well?
You have 2 options.
1) Static realtime. Just put the whole file into MySQL line by line.
https://www.voip-info.org/asterisk-realtime-static
In this mode, when you issue an Asterisk reload, it just reads the configuration from the database line by line and interprets it as a text file.
2) Dynamic realtime.
In this mode Asterisk checks the database only when it has a request to authenticate, and only for matched peers.
https://www.voip-info.org/asterisk-realtime-sip/
Use the regserver parameter to specify your registration server.
The register directive should be a static entry in the sip.conf [general] section, so while you could do this with static realtime, you may then have problems loading dynamic realtime users.
Your best option may be to use the #exec directive in sip.conf. This will allow you to run a script that reads the register line from the database.
To do this, you will need to enable 'execinclude = yes' in asterisk.conf and then add a line in the sip.conf [general] section to run your script, like:
#exec /etc/asterisk/scripts/your_script_file
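Asterisk includes whatever the script writes to standard output as if it were part of sip.conf, so the script only needs to query the database and print a valid register line. For instance (the credentials and provider below are placeholders):
register => myuser:mysecret@sip.provider.example.com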
Here is a nice example from Leif Madsen using #exec to set the externip= parameter via a PHP script:
https://leifmadsen.wordpress.com/2011/02/27/using-exec-to-set-externaddr-in-sip-conf/

Zabbix, naming hostname in templates

Initially, I wanted a way to change when I receive a notification for free hard disk space, which I have successfully done. I would now like to implement this rule on one of the existing templates rather than add it to each host individually. However, when I attempt to add the trigger using the expression {hostname:vfs.fs.size[drive:,pfree].last(0)}<5, I am confused about what to put as the hostname, since I am trying to put this in a template for multiple hosts.
I have tried to name it the template name that consists of the hosts, but have been unsuccessful.
Thanks!
Triggers are associated with the hosts whose items they reference. To create a trigger "in" a template, reference an item from that template, like so:
{template_name:vfs.fs.size[drive:,pfree].last(0)}<5
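For example, assuming a hypothetical template named Template OS Windows that carries the item key vfs.fs.size[C:,pfree], the expression would be:
{Template OS Windows:vfs.fs.size[C:,pfree].last(0)}<5
Zabbix then creates the trigger on every host linked to that template.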

How to pass directives to snappy_ec2 created clusters

We have a need to set some directives in the snappy config files for the various components (servers, locators, etc).
The snappy_ec2 scripts do a good job of creating all of the configs and keeping them in sync across the cluster, but I need to find a serviceable method to add directives to the auto-generated scripts.
What is the preferred method using this script?
Example: Add the following to the 'servers' file:
-gemfirexd.disable-getall-local-index=true
Or perhaps I should add these strings to an environment file such as
snappy-env.sh
TIA
-doug
Have you tried adding the directives directly in the servers (or locators or leads) file and placing this file under (SNAPPY_DIR)/ec2/deploy/home/ec2-user/snappydata/? The script would read the conf files under this dir at the time of launching the cluster.
You'll need to specify it for each server you want to launch, with the name of the server as shown below. See the 'Specifying properties' section in the README, if you have not already done so. For example:
{{SERVER_0}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
{{SERVER_1}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
If you want it to be applied to all the servers, simply put it in snappy-env.sh as you mentioned (as SERVER_STARTUP_OPTIONS) and place the file under the directory mentioned above.
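A minimal sketch of what that line could look like, mirroring the per-server JVM flag above (the heap size is just an illustrative value):
SERVER_STARTUP_OPTIONS="-heap-size=4096m -J-Dgemfirexd.disable-getall-local-index=true"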
We could have read the conf files directly from (SNAPPY_DIR)/conf/ instead of making users copy them to the above location, but we may release the ec2 scripts as a separate package in the future, so that users do not have to download the entire distribution.

Managing configuration in Erlang application

I need to distribute some sort of static configuration through my application. What is the best practice to do that?
I see three options:
1) Call application:get_env directly whenever a module requires a configuration value.
Plus: simpler than the other options.
Minus: how to test such modules without bringing the whole application up?
Minus: how to start a certain module with a different configuration (if required)?
2) Pass the configuration (retrieved from application:get_env) to application modules during start-up.
Plus: modules are easier to test, and you can start them with a different configuration.
Minus: a lot of boilerplate code; changing the configuration format requires fixing several places.
3) Hold the configuration inside a separate configuration process.
Plus: a more-or-less type-safe approach. It is easier to track where a certain parameter is used and change those places.
Minus: need to bring up the configuration process before running the modules.
Minus: how to start a certain module with a different configuration (if required)?
Another approach is to transform your configuration data into an Erlang source module that makes it available through exported functions. You can then change the configuration at any time in a running system by simply loading a new version of the configuration module.
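A minimal sketch of such a generated module (the module and function names are made up for illustration):
-module(app_config).
-export([max_widgets/0, db_host/0]).

max_widgets() -> 1000.
db_host() -> "localhost".
Regenerating the file and reloading it (e.g. with c(app_config). in the shell, or code:load_file/1 from whatever drives your deployment) swaps in the new values without restarting anything.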
For static configuration in my own projects, I like option (1). I'll show you the steps I take to access a configuration parameter called max_widgets in an application called factory.
First, we'll create a module called factory_env which contains the following:
-module(factory_env).
-export([get_env/2, set_env/2]).

-define(APPLICATION, factory).

get_env(Key, Default) ->
    case application:get_env(?APPLICATION, Key) of
        {ok, Value} -> Value;
        undefined -> Default
    end.

set_env(Key, Value) ->
    application:set_env(?APPLICATION, Key, Value).
Next, in a module that needs to read max_widgets we'll define a macro like the following:
-define(MAX_WIDGETS, factory_env:get_env(max_widgets, 1000)).
There are a few nice things about this approach:
Because we used application:set_env/3 and application:get_env/2, we don't actually need to start the factory application in order to have our tests pass.
max_widgets gets a default value, so our code will still work even if the parameter isn't defined.
A second module could use a different default value for max_widgets.
Finally, when we are ready to deploy, we'll put a sys.config file in our priv directory and load it with -config priv/sys.config during startup. This allows us to change configuration parameters on a per-node basis if desired. This cleanly separates configuration from code - e.g. we don't need to make another commit in order to change max_widgets to 500.
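For reference, a minimal priv/sys.config for this example could look like the following (the value is just illustrative):
[{factory, [{max_widgets, 500}]}].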
You could use a process (a gen_server maybe?) to store your configuration parameters in its state. It should expose a get/set interface. If a value hasn't been explicitly set, it should retrieve a default value.
-export([get/1, set/2]).
...
get(Param) ->
    gen_server:call(?MODULE, {get, Param}).
...
handle_call({get, Param}, _From, State) ->
    Reply = case lookup(Param, State#state.params) of
                undefined ->
                    application:get_env(...);
                Value ->
                    {ok, Value}
            end,
    {reply, Reply, State}.
...
You could then easily mock up this module in your tests. It will also be easy to update the process with new configuration at run-time.
You could use pattern matching and tuples to associate different configuration parameters to different modules:
set({ModuleName, ParamName}, Value) ->
...
get({ModuleName, ParamName}) ->
...
Put the process under a supervision tree, so it's started before all the other processes which are going to need the configuration.
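Since a supervisor starts its children in the order they are listed, something like the following could do it (a sketch; config_server is a hypothetical module name):
init([]) ->
    {ok, {{one_for_one, 5, 10},
          [{config_server,
            {config_server, start_link, []},
            permanent, 5000, worker, [config_server]}
           %% children that read the configuration go after this one
          ]}}.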
Oh, I'm glad nobody suggested parametrized modules so far :)
I'd do option 1 for static configuration. You can always test by setting options via application:set_env/3,4. The reason you want to do this is that your tests of the application will need to run the whole application anyway at some point, and the ability to set test-specific configuration at that point is really neat.
The application controller runs by default, so it is not a problem that you need to go the application way (you need to do that anyway!).
Finally, if a process needs specific configuration, say so in the configuration data! You can store any Erlang term; in particular, you can store a term that lets you override configuration parameters for a specific node.
For dynamic configuration, you are probably better off using a gen_server or the newer gproc features that let you store such dynamic configuration.
I've also seen people use a .hrl (Erlang header file) where all the configuration is defined, and include it at the start of any file that needs the configuration.
It makes for very concise configuration lookups, and you get configuration of arbitrary complexity.
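For instance (the file and macro names are made up):
%% config.hrl
-define(MAX_WIDGETS, 1000).
-define(DB_HOST, "localhost").
Each module that needs the configuration then starts with -include("config.hrl").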
I believe you can also reload configuration at runtime by performing hot code reloading of the module. The disadvantage is that if you use configuration in several modules and reload only one of them, only that one module will get its configuration updated.
However, I haven't actually checked if it works like that, and I couldn't find definitive documentation on how .hrl and hot code reloading interact, so make sure to double-check this before you actually use it.