I am trying to find a file path within a larger string of event data. The file path and names vary in length for each event. Example:
Security ID: xxxx Account Name: xxxx Account Domain: xxxx Object Name: c:\temp\MyFile.doc Handle ID: xxxx Resource Attributes: xxxx
I want the path and file name only (c:\temp\MyFile.doc), so I used Mid and InStr to get the string between "Object Name: " and "Handle ID:". It works, except it still leaves some characters after the file name. Below is my expression.
Mid(event,Instr(event,"Object Name: ") + 13, InstrRev(event,"Handle ID:") - Len(mid(event,InstrRev(event,"Handle ID:"))))
Thank You
You could use Split twice:
Text = "Security ID: xxxx Account Name: xxxx Account Domain: xxxx Object Name: c:\temp\MyFile.doc Handle ID: xxxx Resource Attributes: xxxx"
Path = Split(Split(Text, "Object Name: ")(1), " Handle ID:")(0)
Debug.Print Path
c:\temp\MyFile.doc
I want to convert my Liquibase script from the OLD format below to the NEW format. But in the new format the uuid_in(md5(random()::text || clock_timestamp()::text)::cstring) function is not working; it is treating the UUID generator as a plain string. Is there any way to solve this?
OLD-
changeSet:
  id: fulfillment-seed-data-1
  author: sas
  preConditions:
    onFail: MARK_RAN
    sqlCheck:
      expectedResult: 0
      sql: select count(*) from ${schema}.global_setting;
  changes:
    - sql:
        dbms: PostgreSQL
        splitStatements: true
        stripComments: true
        sql: INSERT INTO ${schema}.global_setting (global_setting_id, spec_nm, app_nm, spec_value_txt, spec_desc) VALUES(uuid_in(md5(random()::text || clock_timestamp()::text)::cstring), 'PROD_DIMENSION_TYPE_ID', 'FULFILLMENT', '', '');
NEW-
changeSet:
  id: fulfillment-seed-data-1
  author: sas
  preConditions:
    - dbms:
        type: PostgreSQL
    - onFail: MARK_RAN
  changes:
    - insert:
        columns:
          - column:
              name: global_setting_id
              value:
                - uuid_in(md5(random()::text || clock_timestamp()::text)::cstring)
          - column:
              name: spec_nm
              value: PROD_DIMENSION_TYPE_ID
          - column:
              name: app_nm
              value: FULFILLMENT
          - column:
              name: spec_value_txt
              value:
          - column:
              name: spec_desc
              value:
        tableName: global_setting
Can you use valueComputed for the function call that you are trying to use to compute the value for the column?
https://docs.liquibase.com/concepts/changelogs/attributes/column.html
In the old case you are using straight SQL to make your insert.
In the new format you are modeling the change, so you need to tell Liquibase to execute that function/stored procedure to populate the column value.
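For illustration, a minimal sketch of the modeled insert using valueComputed (based on the changeset in the question, trimmed to two columns):

changes:
  - insert:
      tableName: global_setting
      columns:
        - column:
            name: global_setting_id
            # valueComputed is sent to the database as an expression instead of a quoted string literal
            valueComputed: uuid_in(md5(random()::text || clock_timestamp()::text)::cstring)
        - column:
            name: spec_nm
            value: PROD_DIMENSION_TYPE_ID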
Scenario:
I have InSpec Profile-A (10 controls), Profile-B (15 controls), and Profile-C (5 controls).
Profile-A depends on Profile-B and Profile-C.
I have a file in Profile-A which I am parsing with inspec.profile.file('test.json') and executing the 10 controls in the same profile.
I have to pass the same file to Profile-B and Profile-C so that I can execute the other set of tests in each profile as part of the profile dependency.
I am able to successfully parse the test.json file in Profile-A, as the file is in the correct folder path:
myjson = json(content: inspec.profile.file('test.json'))
puts myjson
I have followed the InSpec documentation to set up the profile dependency and inputs to the dependent profiles.
https://docs.chef.io/inspec/inputs/
Issue:
The issue is that I am able to pass single input values (string, array, etc.) to the dependent profiles, but I am not able to pass the entire JSON file so that it can be parsed and the controls executed.
I have tried the following in the profile metadata files:
# ProfileB inspec.yml
name: profile-b
inputs:
  - name: file1
  - name: file2
# wrapper inspec.yml
name: profile-A
depends:
  - name: profile-b
    path: ../profile-b
inputs:
  - name: file1
    value: 'json(content: inspec.profile.file('test.json'))'
    profile: profile-b
  - name: file2
    value: 'FILE.read('/path/to/test.json')'
    profile: profile-b
Error:
When I try to load file1 and file2 in profile-b with the following:
jsonfile1 = input('file1')
jsonfile2 = input('file2')
puts jsonfile1
puts jsonfile2
error - no implicit conversion of nil to integer
Goal:
I should be able to pass the file from Profile-A to Profile-B or Profile-C so that the respective dependent profile controls are executed.
I have a variable template
var1.yml
variables:
  - name: TEST_DB_HOSTNAME
    value: 10.123.56.222
  - name: TEST_DB_PORTNUMBER
    value: 1521
  - name: TEST_USERNAME
    value: TEST
  - name: TEST_PASSWORD
    value: TEST
  - name: TEST_SCHEMANAME
    value: SCHEMA
  - name: TEST_ACTIVEMQNAME
    value: 10.123.56.223
  - name: TEST_ACTIVEMQPORT
    value: 8161
When I run the below pipeline
resources:
  repositories:
    - repository: templates
      type: git
      name: pipeline_templates
      ref: refs/heads/master

trigger:
  - none

variables:
  - template: templates/var1.yml#templates

pool:
  name: PoolA

steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
I get the output
{
  build.sourceBranchName: master,
  build.reason: Manual,
  system.pullRequest.isFork: False,
  system.jobParallelismTag: Public,
  system.enableAccessToken: SecretVariable,
  TEST_DB_HOSTNAME: 10.123.56.222,
  TEST_DB_PORTNUMBER: 1521,
  TEST_USERNAME: TEST,
  TEST_PASSWORD: TEST,
  TEST_SCHEMANAME: SCHEMA,
  TEST_ACTIVEMQNAME: 10.123.56.223,
  TEST_ACTIVEMQPORT: 8161
}
How can I modify the pipeline to extract only the key/value pairs whose keys start with "TEST_" and store them in another variable in the same format, so that they can be used in other tasks in the same pipeline?
Or, alternatively, how can I iterate over the entries whose keys start with "TEST_" and get their values?
The output you have shown is invalid JSON and cannot be transformed with a JSON tool as-is. Assuming that it were valid JSON:
{
  "build.sourceBranchName": "master",
  "build.reason": "Manual",
  "system.pullRequest.isFork": "False",
  "system.jobParallelismTag": "Public",
  "system.enableAccessToken": "SecretVariable",
  "TEST_DB_HOSTNAME": "10.123.56.222",
  "TEST_DB_PORTNUMBER": 1521,
  "TEST_USERNAME": "TEST",
  "TEST_PASSWORD": "TEST",
  "TEST_SCHEMANAME": "SCHEMA",
  "TEST_ACTIVEMQNAME": "10.123.56.223",
  "TEST_ACTIVEMQPORT": 8161
}
then you can use the to_entries or with_entries filters of jq to get an object containing only those keys which start with "TEST_":
with_entries(select(.key|startswith("TEST_")))
This will give you a new object as output:
{
  "TEST_DB_HOSTNAME": "10.123.56.222",
  "TEST_DB_PORTNUMBER": 1521,
  "TEST_USERNAME": "TEST",
  "TEST_PASSWORD": "TEST",
  "TEST_SCHEMANAME": "SCHEMA",
  "TEST_ACTIVEMQNAME": "10.123.56.223",
  "TEST_ACTIVEMQPORT": 8161
}
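If you want to run that filter from the pipeline itself, a rough sketch (assuming jq is available on the agent and that an earlier step has already written the repaired, valid JSON to a hypothetical vars.json file):

steps:
  - bash: |
      # keep only the entries whose key starts with TEST_
      jq 'with_entries(select(.key|startswith("TEST_")))' vars.json > test_vars.json
      cat test_vars.json
    displayName: Filter TEST_ variables with jq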
The convertToJson() function is a bit messy, as the "JSON" it creates is not, in fact, valid JSON.
There are several possible approaches I can think of:
Use convertToJson() to pass the non-valid JSON to a script step, convert it to valid JSON and then extract the relevant values. I have done this before and it typically works, if you have control over the data in the variables. The downside is that there is a risk that the conversion to valid JSON can fail. A rough sketch of this idea follows below.
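As an illustration only, the sketch below hands the convertToJson() output to a pwsh step; rather than repairing the JSON, it simply filters the raw lines whose key starts with TEST_ (assumptions: no variable value contains a single quote or a line break):

steps:
  - pwsh: |
      # Sketch: split the quasi-JSON into lines and keep the TEST_ entries
      '${{ convertToJson(variables) }}'.Split("`n") |
        Where-Object { $_.Trim().StartsWith('TEST_') } |
        ForEach-Object { Write-Host $_.Trim().TrimEnd(',') }
    displayName: Extract TEST_ entries (text-based sketch)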
Create a YAML loop that iterates over the variables and extracts the ones that begin with Test_. You can find examples of how to write such loops in the Azure DevOps documentation, but basically, it would look like this:
- stage:
  variables:
    firstVar: 1
    secondVar: 2
    Test_thirdVar: 3
    Test_forthVar: 4
  jobs:
    - job: loopVars
      steps:
        - ${{ each var in variables }}:
            - script: |
                echo ${{ var.key }}
                echo ${{ var.value }}
              displayName: handling ${{ var.key }}
If applicable to your use case, you can create complex parameters (instead of variables) for only the Test_ variables. Using this, you could use the relevant values directly and would not need to extract a subset from your variable list. Note however, that parameters are inputs to a pipeline and can be adjusted before execution. Example:
parameters:
  - name: non-test-variables
    type: object
    default:
      firstVar: 1
      secondVar: 2
  - name: test-variables
    type: object
    default:
      Test_thirdVar: 3
      Test_forthVar: 4
You can use these by referencing ${{ parameters['test-variables'].Test_thirdVar }} in the pipeline.
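Or, if you need all of them, you can iterate over the object parameter, similar to the variable loop above (a sketch; note the bracket syntax because the parameter name contains a hyphen):

steps:
  - ${{ each pair in parameters['test-variables'] }}:
      - script: echo "${{ pair.key }} = ${{ pair.value }}"
        displayName: handling ${{ pair.key }}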
I have the following json file called cust.json :
{
  "customer": {
    "CUST1": {
      "zone": "ZONE1",
      "site": "ASIA"
    },
    "CUST2": {
      "zone": "ZONE2",
      "site": "EUROPE"
    }
  }
}
I am using this json file in my main.yml to get a list of customers (CUST1 and CUST2).
main.yml:
- name: Include the vars
  include_vars:
    file: "{{ playbook_dir }}/../default_vars/cust.json"
    name: "cust_json"

- name: Generate customer config
  include_tasks: create_config.yml
  loop: "{{ cust_json.customer }}"
I was hoping the loop would basically pass each customer's code (e.g. CUST1) to create_config.yml, so that something like the following can happen:
create_config.yml:
- name: Create customer config
  block:
    - name: create temporary file for customer
      tempfile:
        path: "/tmp"
        state: file
        prefix: "my customerconfig_{{ item }}."
        suffix: ".tgz"
      register: tempfile

    - name: Setup other things
      include_tasks: "othercustconfigs.yml"
Which would result in:
The following files being generated: /tmp/mycustomerconfig_CUST1 and /tmp/mycustomerconfig_CUST2
The tasks within othercustconfigs.yml being run for CUST1 and CUST2.
Questions:
Running the playbook, it fails at this point:
TASK [myrole : Generate customer config ] ************************************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {
"msg": "Invalid data passed to 'loop', it requires a list, got this instead: {u'CUST1': {u'site': u'ASIA', u'zone': u'ZONE1'}, u'CUST2': {u'site': u'EUROPE', u'zone': uZONE2'}}. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."
}
How do I loop over the JSON so that it gets the list of customers (CUST1 and CUST2) correctly? loop: "{{ cust_json.customer }}" clearly doesn't work.
If I manage to get the above working, is it possible to pass the result of the loop on to include_tasks: "othercustconfigs.yml"? So basically, passing the looped items from main.yml to create_config.yml, and then to othercustconfigs.yml. Is this possible?
Thanks!!
J
cust_json.customer is a hashmap containing one key for each customer, not a list.
The dict2items filter can transform this hashmap into a list of elements each containing a key and value attribute, e.g:
- key: "CUST1"
value:
zone: "ZONE1"
site: "ASIA"
- key: "CUST2"
value:
zone: "ZONE2"
site: "EUROPE"
With this in mind, you can transform your include to the following:
- name: Generate customer config
  include_tasks: create_config.yml
  loop: "{{ cust_json.customer | dict2items }}"
and the relevant task in your included file to:
- name: create temporary file for customer
  tempfile:
    path: "/tmp"
    state: file
    prefix: "my customerconfig_{{ item.key }}."
    suffix: ".tgz"
  register: tempfile
Of course you can adapt all this to use the value element where needed, e.g. item.value.site
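Regarding the second question: the outer loop variable is normally still visible inside the included files, as long as nothing inside defines its own loop. If you prefer an explicit hand-off, you can pass it along when including the next file; a minimal sketch, where customer_code is just a hypothetical variable name:

- name: Setup other things
  include_tasks: "othercustconfigs.yml"
  vars:
    # othercustconfigs.yml can then refer to {{ customer_code }}
    customer_code: "{{ item.key }}"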
You can see the following documentation for in-depth info and alternative solutions:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#dict-filter
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#iterating-over-a-dictionary
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#with-dict
https://jinja.palletsprojects.com/en/2.11.x/templates/#dictsort
I'm currently using Django 1.2.4 and MySQL 5.1 on Ubuntu 9.10. The model is:
# project/cream/models.py
class IceCream(models.Model):
    name = models.CharField(max_length=100)
    code = models.CharField(max_length=5)

    def __unicode__(self):
        return u'%s - %s' % (self.code, self.name)
The fixtures data in a project/cream/fixtures/data.yaml file is:
- model: cream.icecream
  pk: 1
  fields:
    name: Strawberry
    code: ST
- model: cream.icecream
  pk: 2
  fields:
    name: Noir Chocolat
    code: NO
From the project folder, I invoke the command:
python manage.py loaddata cream/fixtures/data.yaml
The data is successfully loaded into the database, but it looks like the following:
False - Noir Chocolat
ST - Strawberry
Notice how the first entry is False instead of NO. Does anyone know how to fix this issue in my fixtures?
NO is treated as False because PyYAML implicitly resolves it as a boolean value, as seen in resolver.py. If you want it to be the actual string "NO", try putting it in quotes ("").
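For example, the second fixture entry could be written like this so the code survives as a string (same data, only the value quoted):

- model: cream.icecream
  pk: 2
  fields:
    name: Noir Chocolat
    code: "NO"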