The SaltStack docs note the existence of the dns_check Jinja filter in 2017.7.3:
{{ 'www.google.com' | dns_check }}
which should return an IPv4 address as a string.
But when I try it:
test_this_one:
  cmd.run:
    - name: |
        echo {{ 'www.google.com' | dns_check }}
I instead see
local:
Data failed to compile:
----------
Rendering SLS 'base:firewall' failed: Jinja error: dns_check() takes at least 2 arguments (1 given)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 418, in render_jinja_tmpl
output = template.render(**decoded_context)
File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 989, in render
return self.environment.handle_exception(exc_info, True)
File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 754, in handle_exception
reraise(exc_type, exc_value, tb)
File "<template>", line 42, in top-level template code
TypeError: dns_check() takes at least 2 arguments (1 given)
Am I missing something? I'm inclined to believe I've made a mistake rather than that the docs are so publicly wrong.
Looking at the source code, it looks like dns_check now takes a port argument - the docstring says:
Tries to connect to the address before considering it useful. If no address can be reached, the first one resolved is used as a fallback.
https://github.com/saltstack/salt/blob/06a00be0e1f06399805e19261e4d00f6cfd9c6a0/salt/utils/network.py#L1750
So it's probably enough to pass any port here and it should work. (And perhaps an issue should be raised about making the port optional?)
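For example, passing an arbitrary port to the filter satisfies the signature (a hedged sketch of the original state; 443 is only an illustration, since the port is just used for the reachability probe):

test_this_one:
  cmd.run:
    - name: |
        echo {{ 'www.google.com' | dns_check(443) }}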
I am trying to check whether a particular file with some extension exists on a CentOS host using SaltStack.
create:
  cmd.run:
    - name: touch /tmp/filex

{% set output = salt['cmd.run']("ls /tmp/filex") %}

output:
  cmd.run:
    - name: "echo {{ output }}"
Even if the file exists, I am getting the error as below:
ls: cannot access /tmp/filex: No such file or directory
I see that you already accepted an answer for this that talks about Jinja being rendered first, which is true. But I wanted to add that you don't have to use cmd.run to check the file; there is a state built into Salt for this.
file.exists will check for a file's or directory's existence in a stateful way.
One of the things about Salt is that you should be looking for ways to get away from cmd.run when you can.
create:
  file.managed:
    - name: /tmp/filex

check_file:
  file.exists:
    - name: /tmp/filex
    - require:
      - file: create
In SaltStack, Jinja is evaluated before the YAML. The file creation (cmd.run) is executed after the Jinja rendering, so your Jinja variable is empty because the file isn't created yet.
See https://docs.saltproject.io/en/latest/topics/jinja/index.html
Jinja statements such as your set output line are evaluated when the sls file is rendered, before any of the states in it are executed. It's not seeing the file because the file hasn't been created yet.
Moving the check to the state definition should fix it:
output:
  cmd.run:
    - name: ls /tmp/filex
    # if your underlying intent is to ensure something runs only
    # once the file exists, you can enforce that here
    - require:
      - cmd: create
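If you genuinely need the result at render time instead, one hedged alternative (a sketch, not part of the answers above) is to call an execution module from Jinja, such as file.file_exists; keep in mind it reflects the minion's state when the SLS is rendered, not after earlier states in the same run have executed:

{% if salt['file.file_exists']('/tmp/filex') %}
output:
  cmd.run:
    - name: "echo /tmp/filex exists"
{% endif %}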
I tried converting my .csv file to .dat format and loading it into Octave. It throws an error:
unable to find file filename
I also tried to load the file in .csv format using the syntax
x = csvread(filename)
and it throws the error:
'filename' undefined near line 1 column 13.
I also tried opening the file in the editor and loading it from there, and now it shows me
warning: load: 'filepath' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'.
How can I load my data?
>> load Salary_Data.dat
error: load: unable to find file Salary_Data.dat
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> x = csvread(Salary_Data)
error: 'Salary_Data' undefined near line 1 column 13
>> x = csvread(Salary_Data.csv)
error: 'Salary_Data' undefined near line 1 column 13
>> load Salary_Data.dat
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.dat' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'
>> load Salary_Data.csv
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.csv' found by searching load path
error: load: unable to determine file format of 'Salary_Data.csv'
Salary_Data.csv
YearsExperience,Salary
1.1,39343.00
1.3,46205.00
1.5,37731.00
2.0,43525.00
2.2,39891.00
2.9,56642.00
3.0,60150.00
3.2,54445.00
3.2,64445.00
3.7,57189.00
3.9,63218.00
4.0,55794.00
4.0,56957.00
4.1,57081.00
4.5,61111.00
4.9,67938.00
5.1,66029.00
5.3,83088.00
5.9,81363.00
6.0,93940.00
6.8,91738.00
7.1,98273.00
7.9,101302.00
8.2,113812.00
8.7,109431.00
9.0,105582.00
9.5,116969.00
9.6,112635.00
10.3,122391.00
10.5,121872.00
Ok, you've stumbled through a whole pile of issues here.
It would help if you didn't give us error messages without the commands that produced them.
The first message means you were telling Octave to open something called filename and it couldn't find anything called filename. Did you define the variable filename? Your second command and its error message suggest you didn't.
Do you know what Octave's working directory is? Is it the same as where the file is located? From the response to your load commands, I'd guess not. The file is located at C:/Users/vaith/Desktop. Octave's working directory is probably somewhere else.
(Try the pwd command and see what it tells you. Use the file browser or the cd command to navigate to the same location as the file. help pwd and help cd commands would also provide useful information.)
The load command, used in command form (load file.txt), can take an input that isn't written as a string. The function form (load('file.txt') or csvread('file.txt')) requires a string input, hence the quotes around file.txt. So all of your csvread calls thought you were giving them variable names, not filenames.
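For instance (illustration of the syntax only, using the generic file.txt name from above):
>> load file.txt             % command syntax: the bare word is treated as a filename
>> load('file.txt')          % function syntax: the filename must be a quoted string
>> x = csvread('file.txt')   % csvread only has the function form, so quotes are required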
Last, the fact that load couldn't read your data isn't overly surprising. Octave is trying to guess what kind of file it is and how to load it. I assume you tried help load to see what the different command options are? You can give it different options to help Octave figure it out. If it actually is a csv file though, and is all numbers not text, then csvread might still be your best option if you use it correctly. help csvread would be good information for you.
It looks from your data like you have a header line that is probably confusing the load command. For data formatted this simply, the csvread command can bring in the data. It will replace your header text with zeros.
So, first, navigate to the location of the file:
>> cd C:/Users/vaith/Desktop
then open the file:
>> mydata = csvread('Salary_Data.csv')
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
If you plan to reuse the filename, you can assign it to a variable, then open the file:
>> myfile = 'Salary_Data.csv'
myfile = Salary_Data.csv
>> mydata = csvread(myfile)
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
Notice how the filename is stored and used as a string with quotation marks, but the variable name is not. Also, csvread converted the non-numeric header data to zeros. The help for csvread and dlmread shows you how to change that to something other than zero, or how to skip a certain number of rows. If you want to preserve the text, you'll have to use some other input function.
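For example (a small sketch, not from the transcript above), csvread accepts zero-based row and column offsets, so skipping the single header line keeps the zeros out of the result:
>> mydata = csvread('Salary_Data.csv', 1, 0)   % start at row 1, i.e. skip the header
mydata =
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
...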
I've been working on writing some mkdocs documentation which includes mermaid diagrams that I'd like to keep in the markdown files instead of turning them into images and embedding those.
I came across this great solution here: https://github.com/squidfunk/mkdocs-material/issues/693#issuecomment-411885426
It uses the super-fences feature of the pymdown-extensions plugin to create a custom code block which renders the mermaid diagrams inside the code block.
It works in mkdocs running locally, but when I submit the configuration file to readthedocs it fails the YAML validation:
Your mkdocs.yml could not be loaded, possibly due to a syntax error (line 18, column 19)
Line 18 in the mkdocs.yml config file is the section which calls the superfences python class
format: !!python/name:pymdownx.superfences.fence_div_format
Looking at the YAML specification (https://yaml.org/spec/1.2/spec.html) shows that !! is for an explicit tag, and it seems to have been part of the spec for quite some time (back to version 1). I've tried making the value a string, but this then causes issues with Python reading it as a string.
Does anyone know if readthedocs supports this or have you been able to get this working some other way?
ReadTheDocs is parsing the mkdocs.yml file using pyyaml, and pyyaml does not know how to construct the value behind a !! tag like that one.
For example:
>>> import yaml
>>> document = """
a: 1
b:
    c: 3
    d: !!4
"""
>>> print(yaml.dump(yaml.load(document)))
<stdin>:1: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/yaml/__init__.py", line 114, in load
return loader.get_single_data()
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 51, in get_single_data
return self.construct_document(node)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 60, in construct_document
for dummy in generator:
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 413, in construct_yaml_map
value = self.construct_mapping(node)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 218, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 143, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 100, in construct_object
data = constructor(self, node)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 427, in construct_undefined
raise ConstructorError(None, None,
yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:4'
in "<unicode string>", line 5, column 8:
d: !!4
^
>>>
See: https://github.com/readthedocs/readthedocs.org/issues/6889
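The error is simply pyyaml reporting that no constructor is registered for that tag; mkdocs evidently loads the file with a loader that can construct python/name tags (which is why it works locally), while the generic loader used for validation cannot. As a rough illustration of the mechanism (a hedged sketch, not the ReadTheDocs or mkdocs code), registering a multi-constructor for that tag prefix makes the same line load:

import yaml

# Illustrative loader only: teach a SafeLoader subclass to accept
# "!!python/name:..." by returning the dotted path as a plain string
# instead of importing anything.
class TolerantLoader(yaml.SafeLoader):
    pass

def construct_python_name(loader, suffix, node):
    # suffix is everything after the registered tag prefix,
    # e.g. "pymdownx.superfences.fence_div_format"
    return suffix

yaml.add_multi_constructor(
    'tag:yaml.org,2002:python/name:',
    construct_python_name,
    Loader=TolerantLoader,
)

doc = "format: !!python/name:pymdownx.superfences.fence_div_format"
print(yaml.load(doc, Loader=TolerantLoader))
# -> {'format': 'pymdownx.superfences.fence_div_format'}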
The cause of the problem is obvious after the fact, but I'd like to share the not-too-obvious cause here.
When running code such as
import jinja2
templateLoader = jinja2.FileSystemLoader(searchpath=".")
templateEnv = jinja2.Environment(loader=templateLoader,
                                 trim_blocks=True,
                                 lstrip_blocks=True)
htmlTemplateFile = 'file.jinja.html'
htmlTemplate = templateEnv.get_template(htmlTemplateFile)
if you get this problem:
Traceback (most recent call last):
...
File "file.py", line xyz, in some_func
htmlTemplate = templateEnv.get_template(htmlTemplateFile)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/jinja2/environment.py", line 812, in get_template
return self._load_template(name, self.make_globals(globals))
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/jinja2/environment.py", line 774, in _load_template
cache_key = self.loader.get_source(self, name)[1]
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/jinja2/loaders.py", line 187, in get_source
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: file.jinja.html
you may find that the discussions online suggest this issue must have something to do with the interaction of jinja2 with Flask, with GAE, with Pyramid, or with SQL, and it may indeed be that your templates are not in a "template" folder; but this problem can also arise from the interaction of jinja2 and the os module.
The culprit is changing the current directory by, for instance,
import os
os.chdir(someDir)
If templateEnv.get_template(...) is called past this point, jinja2 will look for the templates in the "current" dir, even if that has changed.
Since the os module provides os.chdir but no os.pushdir/os.popdir pair, one has to either simulate the latter or avoid chdir altogether.
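One simple workaround (a sketch, assuming the templates live next to the script) is to anchor the loader to an absolute path, so later chdir calls no longer matter:

import os
import jinja2

# Resolve the template directory once, relative to this script, not to the CWD
TEMPLATE_DIR = os.path.dirname(os.path.abspath(__file__))

templateLoader = jinja2.FileSystemLoader(searchpath=TEMPLATE_DIR)
templateEnv = jinja2.Environment(loader=templateLoader,
                                 trim_blocks=True,
                                 lstrip_blocks=True)

os.chdir('/tmp')  # changing the working directory no longer affects template lookup
htmlTemplate = templateEnv.get_template('file.jinja.html')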
I downloaded nltk, but the tagset help is not working.
Whenever I try to access the tagset meanings with:
nltk.help.upenn_tagset('NN')
I get this result:
Traceback (most recent call last):
File "<pyshell#30>", line 1, in <module>
nltk.help.upenn_tagset('NN')
File "C:\Python34\lib\site-packages\nltk\help.py", line 25, in upenn_tagset
_format_tagset("upenn_tagset", tagpattern)
File "C:\Python34\lib\site-packages\nltk\help.py", line 39, in _format_tagset
tagdict = load("help/tagsets/" + tagset + ".pickle")
File "C:\Python34\lib\site-packages\nltk\data.py", line 774, in load
opened_resource = _open(resource_url)
File "C:\Python34\lib\site-packages\nltk\data.py", line 888, in _open
return find(path_, path + ['']).open()
File "C:\Python34\lib\site-packages\nltk\data.py", line 618, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource 'help/tagsets/upenn_tagset.pickle' not found. Please
use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- 'C:\\Users\\aarushi/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Python34\\nltk_data'
- 'C:\\Python34\\lib\\nltk_data'
- 'C:\\Users\\aarushi\\AppData\\Roaming\\nltk_data'
But I already downloaded the tagsets from the Models tab via nltk.download().
So what am I doing wrong here?
As nltk is telling you, it searched for the file help/tagsets/upenn_tagset.pickle in the directories:
- 'C:\\Users\\aarushi/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Python34\\nltk_data'
- 'C:\\Python34\\lib\\nltk_data'
- 'C:\\Users\\aarushi\\AppData\\Roaming\\nltk_data'
And could not find it.
Is it there?
If not, use nltk.download() to get it, and make sure it's in one of these directories.
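If it isn't there, the piece you need is the tagsets help package rather than anything under the Models tab; downloading it directly is usually enough (a small sketch, assuming the standard nltk_data package name):

import nltk

# 'tagsets' is the nltk_data package that provides help/tagsets/upenn_tagset.pickle
nltk.download('tagsets')
nltk.help.upenn_tagset('NN')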
Reference image: https://i.stack.imgur.com/Ri7E2.png
Check the image above for a summary of what exactly needs to be downloaded.
One of the fastest and easiest ways to resolve this issue is to download the tagsets package via nltk.download().
You can follow the steps below:
Open a Jupyter notebook or a Python shell in your terminal.
Run nltk.download() (from the shell or the notebook); it will open the downloader GUI.
Search for the tagsets package in the All Packages tab.
Download it, and you're done.