Error when executing this Ansible/PowerShell - JSON

We would like to run a package comparer that compares Chocolatey packages in the local repository with the public repository. When we execute the playbook, copying the script succeeds and the script starts executing, but it then fails for some reason. Maybe one of you could help me with this?
tasks:
  - name: Copy powershell file to Chocolatey server
    win_copy:
      src: ../powershell/package_comparer.ps1
      dest: C:\Temp\
  - name: Executing Powershell script
    win_shell: C:\Temp\package_comparer.ps1
    changed_when: false
    register: result
  - name: parse .json file
    set_fact:
      packages_result: "{{ result.stdout | from_json }}"
  - debug:
      msg: "{{ packages_result }}"
We would like to be able to execute this without error. Instead, we currently get this error:
[WARNING]: Failure using method (v2_runner_on_failed) in callback plugin (<ansible.plugins.callback.yaml.CallbackModule object at 0x7fecd4ee8910>): value must be a string
Callback Exception:
File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 333, in send_callback
method(*new_args, **kwargs)
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 93, in v2_runner_on_failed
self._display.display("fatal: [%s]: FAILED! => %s" % (result._host.get_name(), self._dump_results(result._result)),
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/yaml.py", line 123, in _dump_results
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))
File "/usr/lib64/python2.7/site-packages/yaml/__init__.py", line 293, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/usr/lib64/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/usr/lib64/python2.7/site-packages/yaml/representer.py", line 30, in represent
self.serialize(node)
File "_yaml.pyx", line 1348, in _yaml.CEmitter.serialize (ext/_yaml.c:15963)
File "_yaml.pyx", line 1510, in _yaml.CEmitter._serialize_node (ext/_yaml.c:18037)
File "_yaml.pyx", line 1431, in _yaml.CEmitter._serialize_node (ext/_yaml.c:17021)
PLAY RECAP **********************************************************************************************************************************************************************************************
host_machine : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Related

Publish Python Package via GitHub Actions to AWS CodeArtifact

I am having a hard time publishing a package to AWS CodeArtifact. The problem is the authentication.
First I tried to perform the login via the AWS CLI, but that didn't work out because the .pypirc file containing the repository settings was missing. Then I tried to store the token and feed it into --repository-url, but in both cases the process ends up asking for a username anyway.
Stacktrace:
File "/opt/hostedtoolcache/Python/3.9.1/x64/bin/twine", line 8, in <module>
sys.exit(main())
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/__main__.py", line 28, in main
result = cli.dispatch(sys.argv[1:])
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/cli.py", line 82, in dispatch
return main(args.args)
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/commands/upload.py", line 154, in main
return upload(upload_settings, parsed_args.dists)
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/commands/upload.py", line 91, in upload
repository = upload_settings.create_repository()
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/settings.py", line 345, in create_repository
self.username,
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/settings.py", line 146, in username
return cast(Optional[str], self.auth.username)
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/auth.py", line 35, in username
return utils.get_userpass_value(
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/utils.py", line 241, in get_userpass_value
return prompt_strategy()
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/auth.py", line 81, in username_from_keyring_or_prompt
return self.prompt("username", input)
File "/opt/hostedtoolcache/Python/3.9.1/x64/lib/python3.9/site-packages/twine/auth.py", line 92, in prompt
return how(f"Enter your {what}: ")
EOFError: EOF when reading a line
Enter your username:
Error: Process completed with exit code 1.
Partial github-action.yml:
steps:
  - uses: actions/checkout@v2
  - name: Set up Python
    uses: actions/setup-python@v2
    with:
      python-version: '3.x'
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install setuptools wheel twine
  - name: Configure AWS Credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_CA_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_CA_SECRET_ACCESS_KEY }}
      aws-region: eu-central-1
  - name: Build and publish
    run: |
      token=$(aws codeartifact get-authorization-token --domain foobar --domain-owner 123456678901 --query authorizationToken --output text)
      python setup.py sdist bdist_wheel
      twine upload --repository-url https://aws:$token@foobar-123456678901.d.codeartifact.eu-central-1.amazonaws.com/pypi/my-repo/simple dist/*
You need to pass the correct authentication values to twine; try the following:
twine upload --repository-url https://foobar-123456678901.d.codeartifact.eu-central-1.amazonaws.com/pypi/my-repo/simple --username aws --password $token dist/*
see: https://twine.readthedocs.io/en/latest/#commands
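Wired into the partial workflow above, that looks like the following step. This is only a sketch: the domain, owner, region, and repository values are copied from the question, and the step name mirrors the original.
  - name: Build and publish
    run: |
      # Fetch a short-lived CodeArtifact token and pass it to twine explicitly
      token=$(aws codeartifact get-authorization-token --domain foobar --domain-owner 123456678901 --query authorizationToken --output text)
      python setup.py sdist bdist_wheel
      twine upload --repository-url https://foobar-123456678901.d.codeartifact.eu-central-1.amazonaws.com/pypi/my-repo/simple --username aws --password "$token" dist/*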
The AWS CLI lets you configure credentials for twine so you don't have to pass them explicitly.
  - name: Build and publish
    run: |
      aws codeartifact login --tool twine --domain foobar --repository my-repo
      python setup.py sdist bdist_wheel
      twine upload --repository codeartifact dist/*
Links:
https://docs.aws.amazon.com/codeartifact/latest/ug/python-configure.html
https://docs.aws.amazon.com/codeartifact/latest/ug/python-run-twine.html

Google Cloud Function error "OperationError: code=3, message=Function failed on loading user code"

I get an error from time to time when deploying nodejs10 cloud functions to GCP. The error seems to go away on its own; I just redeploy the same thing a few times. Does anyone know what causes it? Here's the log:
command: gcloud beta functions deploy exchangeIcon --verbosity debug --runtime nodejs10 --memory 128 --region europe-west1 --timeout 5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point app
DEBUG: Running [gcloud.beta.functions.deploy] with arguments: [--entry-point: "app", --memory: "134217728", --region: "europe-west1", --runtime: "nodejs10", --set-env-vars: "OrderedDict([(u'FUNCTION_REGION', u'europe-west1'), (u'BUILD_DATE', u'2019-05-09T10:01:05.497Z')])", --timeout: "5", --trigger-http: "True", --verbosity: "debug", NAME: "exchangeIcon"]
INFO: Not using a .gcloudignore file.
INFO: Not using a .gcloudignore file.
Deploying function (may take a while - up to 2 minutes)...
..........................................................................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message:
Traceback (most recent call last):
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code.
In my Stackdriver Logging I just see INVALID_ARGUMENT, but nothing else.
The problem stems from your terminal command's flags not being properly formatted:
--verbosity=debug
is the proper way to type this, with an equals sign between the flag and its value. The same applies to your runtime flag, i.e. --runtime=nodejs10.

Airflow: Cannot assign requested address error while using emailoperator

Unable to receive email on task failure or even using EmailOperator
Hi Guys,
I am unable to receive email from my box even after adding the required parameters to send one.
Below is what my default_args looks like:
default_args = {
    'owner': 'phonrao',
    'depends_on_past': False,
    # 'start_date': datetime(2019, 3, 28),
    'start_date': airflow.utils.dates.days_ago(2),
    'email': ['phonrao@gmail.com'],
    'email_on_failure': True,
    'email_on_retry': True,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
    # 'on_failure_callback': report_failure,
    # 'end_date': datetime(2020, 4, 1),
    # 'schedule_interval': '@hourly',
}
I have a few SimpleHttpOperator tasks in between -- those are working and succeed, but they do not send email on error (I purposely introduced an error to check whether they send any email). Below is an example of my task.
t1 = SimpleHttpOperator(
    task_id='t1',
    http_conn_id='http_waterfall',
    endpoint='/update_data',
    method='POST',
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    log_response=True,
    dag=dag,
)
and this is my EmailOperator task
t2 = EmailOperator(
    dag=dag,
    task_id="send_email",
    to='phonrao@gmail.com',
    subject='Success',
    html_content="<h3>Success</h3>"
)
t2 >> t1
Below is the error from the logs:
[2019-04-02 15:28:21,305] {{base_task_runner.py:101}} INFO - Job 845: Subtask send_email [2019-04-02 15:28:21,305] {{cli.py:520}} INFO - Running <TaskInstance: schedulerDAG.send_email 2019-04-02T15:23:08.896589+00:00 [running]> on host a47cd79aa987
[2019-04-02 15:28:21,343] {{logging_mixin.py:95}} INFO - [2019-04-02 15:28:21,343] {{configuration.py:255}} WARNING - section/key [smtp/smtp_user] not found in config
[2019-04-02 15:28:21,343] {{models.py:1788}} ERROR - [Errno 99] Cannot assign requested address
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1657, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/operators/email_operator.py", line 78, in execute
mime_subtype=self.mime_subtype, mime_charset=self.mime_charset)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 55, in send_email
mime_subtype=mime_subtype, mime_charset=mime_charset, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 101, in send_email_smtp
send_MIME_email(smtp_mail_from, recipients, msg, dryrun)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 121, in send_MIME_email
s = smtplib.SMTP_SSL(SMTP_HOST, SMTP_PORT) if SMTP_SSL else smtplib.SMTP(SMTP_HOST, SMTP_PORT)
File "/usr/local/lib/python3.6/smtplib.py", line 251, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/lib/python3.6/smtplib.py", line 336, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/lib/python3.6/smtplib.py", line 307, in _get_socket
self.source_address)
File "/usr/local/lib/python3.6/socket.py", line 724, in create_connection
raise err
File "/usr/local/lib/python3.6/socket.py", line 713, in create_connection
sock.connect(sa)
OSError: [Errno 99] Cannot assign requested address
[2019-04-02 15:28:21,351] {{models.py:1817}} INFO - All retries failed; marking task as FAILED
Below is my airflow.cfg
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow
# smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
Has anyone encountered this issue, and does anyone have suggestions on how I can resolve it?
If your Airflow is running on Kubernetes (installed via the Helm chart), you should take a look at the "airflow-worker-0" pod and make sure the SMTP environment variables (SMTP_HOST, SMTP_USER, ...) are available in its config. For simple debugging, open a shell in the airflow-worker container, start python, and try these commands to make sure sending works correctly:
import airflow
airflow.utils.email.send_email('example@gmail.com', 'Airflow TEST HERE', 'This is airflow status success')
I had the same issue and resolved it by fixing the SMTP environment variables; now it works.
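As a sketch of what that configuration could look like, assuming the community Helm chart's env list is used to supply settings (the host and port values below are placeholders; Airflow maps AIRFLOW__SMTP__* environment variables onto the [smtp] section of airflow.cfg):
# Hypothetical values.yaml excerpt; each entry becomes an environment
# variable in the worker pod and overrides the matching airflow.cfg key.
env:
  - name: AIRFLOW__SMTP__SMTP_HOST
    value: smtp.example.com     # placeholder; must be reachable from the pod, not localhost
  - name: AIRFLOW__SMTP__SMTP_PORT
    value: "25"
  - name: AIRFLOW__SMTP__SMTP_MAIL_FROM
    value: airflow@example.com  # placeholder sender address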

Installation of Indivo Database - Errors during Testing Indivo Backend Server

I am a Computer Engineering undergraduate student, working on a research project on explanation of health data. For my project, I am required to access the Indivo health database. I am running Ubuntu 14.10 in Oracle's VirtualBox, with Windows 8.1 as the host OS on my laptop.
I completed all the installation steps per the instructions here - http://docs.indivohealth.org/en/2.0/howtos/install-ubuntu.html. Out of the three options, I installed MySQL for the database.
But I am stuck while testing the backend server. I always receive two errors:
osboxes@osboxes:~/IndivoHDB/indivo_server$ python manage.py cleanup_old_tokens
osboxes@osboxes:~/IndivoHDB/indivo_server$ python manage.py test indivo
Creating test database for alias 'default'...
.................F............................................................................................
RUNNING INTEGRATION TESTS:
=============================================================================
Report:
.......... pass : Document Handling Test
.......... pass : Sharing
.......... pass : PHA Document Handling
.......... pass : PHAing record_app delete
.......... pass : PHAing app delete
.......... pass : AppSpecific
.......... pass : Document Metadata Test
.......... pass : OAuthing
.......... pass : Binary Document Test
.......... pass : Accounting
.......... pass : Record Shares
.......... pass : Messaging
.......... pass : Special Document Handling
.......... pass : Auditing
.......... pass : Document Processing Test
.......... pass : Security
=============================================================================
......................................................................................................................E.........................................
ERROR: test_get_smart_ontology (indivo.tests.api.smart_tests.SMARTInternalTests)
Traceback (most recent call last):
File "/home/osboxes/IndivoHDB/indivo_server/indivo/tests/api/smart_tests.py", line 11, in test_get_smart_ontology
response = self.client.get('/ontology')
File "/usr/local/lib/python2.7/dist-packages/django/test/client.py", line 439, in get
response = super(Client, self).get(path, data=data, **extra)
File "/usr/local/lib/python2.7/dist-packages/django/test/client.py", line 241, in get
return self.request(**r)
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/home/osboxes/IndivoHDB/indivo_server/indivo/lib/utils.py", line 38, in call
return view_func(request, *args, **kwargs)
File "/home/osboxes/IndivoHDB/indivo_server/indivo/views/smart_container.py", line 19, in smart_ontology
ontology = urllib2.urlopen(url).read()
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1199, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1169, in do_open
raise URLError(err)
URLError:
======================================================================
FAIL: test_get_connect_credentials (indivo.tests.api.accounts_tests.AccountInternalTests)
Traceback (most recent call last):
File "/home/osboxes/IndivoHDB/indivo_server/indivo/tests/api/accounts_tests.py", line 376, in test_get_connect_credentials
self.assertEqual(db_rt.expires_at, iso8601.parse_utc_date(data.findtext('ExpiresAt')))
AssertionError: datetime.datetime(2015, 6, 22, 5, 36, 32) != datetime.datetime(2015, 6, 22, 5, 36, 32, 977919)
Ran 270 tests in 301.989s
FAILED (failures=1, errors=1)
Destroying test database for alias 'default'...
Please help me, as I am totally new to this.
I saw the same errors with my setup as well. I contacted their Google group and found that we can safely ignore those errors, go ahead, and check whether things actually work. Mine is working fine even though I came across those errors.

rabbitmq_parameter module json argument

I'm trying to use the rabbitmq_parameter Ansible module to set a federation upstream set, while dynamically generating the set, with something like this:
- name: Set federation upstream set
  rabbitmq_parameter:
    component: federation-upstream-set
    name: my-upstreams
    vhost: my-vhost
    value: "{{ my_upstream_set }}"
The variable my_upstream_set is defined in a separate host variable file, like so:
my_upstream_set: [{"upstream": "upstream1"}, {"upstream": "upstream2"}]
However, no matter how I generate the value argument, which must be JSON (with or without quotes, with single or double quotes, YAML- or JSON-formatted), I can't get this to work. I either get the task failing with "stderr: Error: JSON decoding error", or the following error:
failed: [myhost] => {"failed": true, "parsed": false}
invalid output was: Traceback (most recent call last):
File "<stdin>", line 1498, in <module>
File "<stdin>", line 142, in main
File "<stdin>", line 104, in set
File "<stdin>", line 88, in _exec
File "<stdin>", line 1351, in run_command
File "/usr/lib/python2.7/posixpath.py", line 261, in expanduser
if not path.startswith('~'):
AttributeError: 'list' object has no attribute 'startswith'
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
I've tried running the task with a hardcoded value (so, directly in the task file) and it works as expected, but I have no way of integrating variables into that. Any idea what I might be doing wrong here? Thanks!
my_upstream_set should be a JSON string; in your case it is a list. Either quote the definition so it stays a string, or serialize the list at the point of use, as in the sketch below.
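A minimal sketch of those two options, reusing the task and variable from the question (to_json is a standard Ansible/Jinja2 filter; pick one approach, not both):
# Option 1: keep the host variable as a list and serialize it at the point of use
- name: Set federation upstream set
  rabbitmq_parameter:
    component: federation-upstream-set
    name: my-upstreams
    vhost: my-vhost
    value: "{{ my_upstream_set | to_json }}"
# Option 2: define the host variable as a JSON string in the host variable file
# my_upstream_set: '[{"upstream": "upstream1"}, {"upstream": "upstream2"}]'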