I was reading the state files of vim-formula on GitHub.
There is a file named salt.sls:
{% from "vim/map.jinja" import vim with context %}

include:
  - vim

salt_vimfiles:
  file.recurse:
    - name: {{ vim.share_dir }}
    - source: salt://vim/files/salt
But I couldn't find the vim.sls that salt.sls includes anywhere in the current directory. I read the guide on SaltStack's website, and I understand that the word include means to reuse a state file, right?
So I think it must be related to the Jinja2 line {% from "vim/map.jinja" import vim with context %}
and to map.jinja:
{% set vim = salt['grains.filter_by']({
    'Arch': {
        'pkg': 'vim',
        'share_dir': '/usr/share/vim/vimfiles',
        'group': 'root',
        'config_root': '/etc',
    },
    'Debian': {
        'pkg': 'vim',
        'share_dir': '/usr/share/vim/vimfiles',
        'group': 'root',
        'config_root': '/etc/vim',
    },
    'RedHat': {
        'pkg': 'vim-enhanced',
        'share_dir': '/usr/share/vim/vimfiles',
        'group': 'root',
        'config_root': '/etc',
    },
    'Suse': {
        'pkg': 'vim',
        'share_dir': '/usr/share/vim/site',
        'group': 'root',
        'config_root': '/etc',
    },
    'FreeBSD': {
        'pkg': 'vim',
        'share_dir': '/usr/local/share/vim/vimfiles',
        'group': 'wheel',
        'config_root': '/etc',
    },
}, merge=salt['pillar.get']('vim:lookup')) %}
I must agree, it is insane to read Jinja, and it is even worse if you are new to the salt file structure. The documentation's poor emphasis on the basics is what causes the confusion: you need to understand quite a bit of basic SaltStack setup structure to avoid it.
Now to the answer.
Imagine you copy the whole formula folder into your salt states folder; say the file_roots in your salt master configuration (/etc/salt/master) is /srv/salt/states.
The formula assumes you copy the vim folder from the GitHub formula source under the file roots, so you should end up with something like /srv/salt/states/vim.
Now comes the fun part: since file_roots is /srv/salt/states, anything under that folder is, from the salt master's point of view, addressed as salt://. And since your vim folder sits under it, it is referred to as salt://vim.
Now back to your salt.sls in /srv/salt/states/vim: it has no problem finding the include: - vim.
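To make the mapping concrete (the paths are the ones assumed above):

# /etc/salt/master
file_roots:
  base:
    - /srv/salt/states

# On disk                          As the master sees it
/srv/salt/states/vim               salt://vim
/srv/salt/states/vim/salt.sls      salt://vim/salt.sls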
The SaltStack Get Started guide is a much better place for a beginner to start. Just repeat the tutorial a few times; it will clear up most of the confusion.
Now for another basic: how SaltStack traverses folders. This also explains how include finds the correct file.
If you have a top file like this:
base:
  myserver:
    - app
    - db.myserver
For the first entry, app, there are two ways to write the state.
First way: put the state into app.sls.
Second way: create a folder called app, then put the state into app/init.sls.
The first way is straightforward. The second way is kind of "magic" if you haven't read the basics. In fact, init.sls is the state file. You can put many .sls files inside the app folder, but salt does not care about the others unless you call them explicitly, as db.myserver does in the example.
# This is the first way: direct reference to the sls file
+-- app.sls
+-- db/
    +-- myserver.sls

# Second way: using init.sls as an anchor inside the folder
+-- app/
|   +-- init.sls
+-- db/
    +-- myserver/
        +-- init.sls
So, coming to the second entry, db.myserver:
This looks straightforward: the salt master traverses into the salt://db/ folder and looks for myserver.sls.
However, mixing in the second way, you should know you may also write the state file as /db/myserver/init.sls; salt looks for the file in both places.
Now go back to your vim formula folder and read /vim/init.sls. You can now see that include: - vim means: traverse to salt://vim/init.sls OR salt://vim.sls.
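So for this formula the lookup resolves like this (tree abbreviated to the files mentioned above):

/srv/salt/states/vim/       <-- addressed as salt://vim
    init.sls                <-- what include: - vim resolves to
    salt.sls
    map.jinja
    files/salt/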
You may ask: what if you mix both structures? My suggestion: don't do it. You will confuse yourself and whoever maintains your saltstack.
I have been looking for information on Google and Stack Overflow, but I didn't find a good solution.
I need to handle a list, adding elements, deleting elements... but saved in a file. This is in order to avoid losing the list when the execution finishes, because I need to run my Python script periodically. Here are the alternatives I found, but they each have problems:
shelve module: I can't find how to delete a single element of the list (something like list.pop()) instead of deleting the whole list.
pprint.pformat(): to modify information, I need to rewrite the whole document and save the modified information, which is very inefficient.
json: tedious for just a list, and it doesn't seem to solve my problem.
So, what is the best way to handle a list, doing things as easily as mylist.pop(), while keeping the changes in a file efficiently?
Since this has never been answered before, here is an efficient way. The package pysos can handle disk-backed lists with inserts/deletes in constant time.
pip install pysos
Code example:
import pysos
db = pysos.List('somefile')
db.append('saved in the file 0')
db.append('saved in the file 1')
db.append('saved in the file 2')
print(db[1]) # => 'saved in the file 1'
del db[1]
print(db[1]) # => 'saved in the file 2'
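For completeness, the shelve approach mentioned in the question can also delete single elements if you mutate the stored list in place; here is a minimal stdlib sketch (the file name is arbitrary, and note that shelve re-serializes the whole list on close, so it is less efficient than pysos for large lists):

import shelve

# writeback=True keeps a cache of loaded objects and writes mutated ones
# back to disk on sync()/close(), so in-place list edits persist.
with shelve.open('mylist.db', writeback=True) as db:
    items = db.setdefault('items', [])
    items.append('survives between runs')
    items.pop()  # removing a single element works like a normal list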
In Airflow, we've created several DAGs, some of which share common properties, for example the directory to read files from. Currently, these properties are listed in each separate DAG, which will obviously become problematic in the future: if the directory name were to change, we'd have to go into each DAG and update this piece of code (possibly even missing one).
I was looking into creating some sort of configuration file which can be parsed by Airflow and used by the various DAGs when a certain property is required, but I cannot seem to find any documentation or guide on how to do this. The most I could find was the documentation on setting up Connection IDs, but that does not meet my use case.
The question of my post: is it possible to do the above scenario, and how?
Thanks in advance.
There are a few ways you can accomplish this based on your setup:
You can use a DagFactory type approach where you have a function generate DAGs. You can find an example of what that looks like here
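A minimal sketch of the factory idea (the DAG ids, start date, schedule, and the source_dir value are assumptions for illustration):

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

def create_dag(dag_id, source_dir):
    # Shared properties are defined once here; callers pass only what varies.
    dag = DAG(dag_id,
              start_date=datetime(2018, 1, 1),
              schedule_interval='@daily',
              params={'source_dir': source_dir})
    with dag:
        DummyOperator(task_id='start')
    return dag

# Each generated DAG must land in the module's globals() so the scheduler
# can discover it when it parses this file.
for app in ('app_one', 'app_two'):
    globals()['sync_' + app] = create_dag('sync_' + app, '/data/' + app)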
You can store a JSON config as an Airflow Variable, and parse through that to generate a DAG. You can store something like this in Admin -> Variables:
[
    {
        "table": "users",
        "schema": "app_one",
        "s3_bucket": "etl_bucket",
        "s3_key": "app_one_users",
        "redshift_conn_id": "postgres_default"
    },
    {
        "table": "users",
        "schema": "app_two",
        "s3_bucket": "etl_bucket",
        "s3_key": "app_two_users",
        "redshift_conn_id": "postgres_default"
    }
]
Your DAG could get generated as:
import json

# Import paths as of Airflow 1.x
from airflow.models import Variable
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.redshift_to_s3_operator import RedshiftToS3Transfer

sync_config = json.loads(Variable.get("sync_config"))

with dag:  # `dag` is your DAG instance, defined as usual
    start = DummyOperator(task_id='begin_dag')
    for table in sync_config:
        d1 = RedshiftToS3Transfer(
            task_id='{0}'.format(table['s3_key']),
            table=table['table'],
            schema=table['schema'],
            s3_bucket=table['s3_bucket'],
            s3_key=table['s3_key'],
            redshift_conn_id=table['redshift_conn_id']
        )
        start >> d1
Similarly, you can just store that config as a local file and open it as you would any other file. Keep in mind the best answer to this will depend on your infrastructure and use case.
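A minimal sketch of that local-file variant (the file name and its location next to the DAG file are assumptions):

import json
import os

# Hypothetical shared config stored alongside the DAG files
config_path = os.path.join(os.path.dirname(__file__), 'sync_config.json')

with open(config_path) as f:
    sync_config = json.load(f)

# From here on, sync_config drives task generation exactly like the
# Variable-based example above.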
In Laravel 5 I am trying to create two different CSS files for my frontend site and backend site (CMS). The source files are in two different directories.
The default value for the assets directory in laravel-elixir is resources/assets/, so I override it for each build.
First the backend:
elixir.config.assetsDir = 'resources/backend/';

elixir(function (mix) {
    mix.less('backend.less');
});
Second the frontend:
elixir.config.assetsDir = 'resources/frontend/';

elixir(function (mix) {
    mix.less('frontend.less');
});
Both are in the same gulpfile.js.
These are the directories (Laravel 5)
resources
    backend
        less
            backend.less
    frontend
        less
            frontend.less
Only the frontend file is compiled to public/css/frontend.css.
I also tried
mix.less('frontend.less', null, 'resources/frontend/');
Though this works for mixing script files, it is not working for mixing less files.
Update 28-3-2015
There seems to be no solution for my problem. When I do:
elixir.config.assetsDir = 'resources/frontend/';
mix.less('frontend.less');
elixir.config.assetsDir = 'resources/backend/';
mix.less('backend.less');
Only the last one (backend) is executed. When I comment out the last two lines, the first one (frontend) is executed. It's OK for now because the backend styles should not change very often, but it would be very nice to mix multiple less files from multiple resource folders to multiple destination folders.
Try:
elixir(function (mix) {
    mix.less([
        'frontend/frontend.less',
        'backend/backend.less'
    ], null, './resources');
});
Instead of your variant:
elixir(function (mix) {
    elixir.config.assetsDir = 'resources/frontend/';
    mix.less('frontend.less');

    elixir.config.assetsDir = 'resources/backend/';
    mix.less('backend.less');
});
Try this code:
elixir.config.assetsDir = 'resources/frontend/';
elixir(function (mix) {
    mix.less('frontend.less');
});

elixir.config.assetsDir = 'resources/backend/';
elixir(function (mix) {
    mix.less('backend.less');
});
I have been playing around with this for a couple of days, and the best option I've found so far is as follows.
First, leave your resource files in the default location, so for less files that is resources/assets/less. Then, to separate the files into your front-end and back-end resources, add subfolders in that resource folder like so:
resources/assets/less/frontend/frontend.less
resources/assets/less/backend/backend.less
Now call each one like so:
mix.less('frontend/frontend.less', 'public/css/frontend/frontend.css');
mix.less('backend/backend.less', 'public/css/backend/backend.css');
The second parameter provided to each mix.less can point to wherever you want it to.
You can't split at the highest level, directly in the resource root, but it still allows some separation, and everything is compiled in one gulp run.
I have found the following to work:
elixir(function (mix) {
    mix
        .less(['app.less'], 'public/css/app.css')
        .less(['bootstrap.less'], 'public/css/bootstrap.css');
});
The key things to notice:
provide the file name in the destination, i.e. writing public/css/app.css instead of public/css/
chain the .less calls instead of making two separate mix.less() calls
Works for me with laravel-elixir version 3.4.2
I have built a custom contenttype with an image field in Bolt 2.0.
image:
    type: image
If no folder is specified, the uploaded file goes to a folder named after the year and month.
Result: 2014-11/myFileName.jpg
With the upload key I can change this to something else:
image:
    type: image
    upload: "News/"
Result: News/myFileName.jpg
Is it possible to get the year-month folders after my custom path?
Desired result: News/2014-11/myFileName.jpg
The answer to this is yes, but not very simply, so if you want a configurable way to do this you will need to wait for Bolt 2.1, where we're going to add variables to the upload: setting.
If you don't mind setting up your own bootstrap file and modifying the application then you can do it now.
The date prefix is generated by the $app['upload.prefix'] service, which currently returns the date string. To modify this, you need to replace it with your own closure. I haven't tested this on a project, so tweak if needed, but after:
$app->initialize();
// Redefine the closure (note the use ($app) so the container is in scope)
$app['upload.prefix'] = function () use ($app) {
    $setting = $app['request']->get('handler');
    $parts = explode('://', $setting);
    $prefix = rtrim($parts[0], '/') . '/';
    return $prefix . date('Y-m') . '/';
};
$app->run();
What we're doing here is reading the setting which is passed in the request and then concatenating the default date prefix onto the end of it.
As mentioned earlier, 2.1 will see variable support introduced into the paths, so options like
upload: news/{%month%}/{%day%}
upload: uploads/{%contenttype%}/{%id%}
will be easily definable in the contenttypes.yml file, so if you don't mind waiting a couple of months, this is obviously much simpler.
As of 3.2.9 this {%id%} principle doesn't seem to work yet ... :(
I have two different versions of Linux/Unix, each running CFEngine 3. Is it possible to have one promises.cf file I can put on both machines that will copy different files based on the client's OS? I have been searching around the internet for a few hours now and have not found anything useful yet.
There are several ways of doing this. At the simplest, you can have different files: promises depending on the operating system, for example:
files:

  ubuntu_10::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.ubuntu_10");

  suse_9::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.suse_9");

  redhat_5::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.redhat_5");

  windows_7::
    "/etc/hosts"
      copy_from => mycopy("$(repository)/etc.hosts.windows_7");
This example can be easily simplified by realizing that the built-in CFEngine variable $(sys.flavor) contains the type and version of the operating system, so we could rewrite this example as follows:
"/etc/hosts"
copy_from => mycopy("$(repository)/etc.$(sys.flavor)");
A more flexible way to achieve this task is known in CFEngine terminology as "hierarchical copy." In this pattern, you specify an arbitrary list of variables by which you want files to be differentiated, and the order in which they should be considered, from most specific to most general. When the copy promise is executed, the most-specific file found will be copied.
This pattern is very simple to implement:
# Use single copy for all files
body agent control
{
    files_single_copy => { ".*" };
}

bundle agent test
{
  vars:
    "suffixes" slist => { ".$(sys.fqhost)", ".$(sys.uqhost)", ".$(sys.domain)",
                          ".$(sys.flavor)", ".$(sys.ostype)", "" };

  files:
    "/etc/hosts"
      copy_from => local_dcp("$(repository)/etc/hosts$(suffixes)");
}
As you can see, we are defining a list variable called $(suffixes) that contains the criteria by which we want to differentiate the files. All the variables contained in this list are automatically defined by CFEngine, although you could use any arbitrary CFEngine variables. Then we simply include that variable, as a scalar, in our copy_from parameter. Because CFEngine does automatic list expansion, it will try each value in turn, executing the copy promise multiple times (once for each value in the list) and copying the first file that exists. For example, for a Linux SuSE 11 machine called superman.justiceleague.com, the $(suffixes) variable will contain the following values:
{ ".superman.justiceleague.com", ".superman", ".justiceleague.com", ".suse_11",
".linux", "" }
When the file-copy promise is executed, implicit looping will cause these strings to be appended in sequence to "$(repository)/etc/hosts", so the following filenames will be attempted in order: hosts.superman.justiceleague.com, hosts.superman, hosts.justiceleague.com, hosts.suse_11, hosts.linux and hosts. The first one that exists will be copied over /etc/hosts on the client, and the rest will be skipped.
For this technique to work, we have to enable "single copy" on all the files you want to process. This is a configuration parameter that tells CFEngine to copy each file at most once, ignoring successive copy operations for the same destination file. The files_single_copy parameter in the agent control body specifies a list of regular expressions to match filenames to which single-copy should apply. By setting it to ".*" we match all filenames.
For hosts that don't match any of the existing files, the last item on the list (an empty string) will cause the generic hosts file to be copied. Note that the dot for each of the filenames is included in $(suffixes), except for the last element.
I hope this helps.
(p.s. and shameless plug: this is taken from my upcoming book, "Learning CFEngine 3", published by O'Reilly)