Generation of UUID in yaml liquibase script - mysql

I want to convert my Liquibase script from the OLD format below to the NEW format. But in the new format the uuid_in(md5(random()::text || clock_timestamp()::text)::cstring) function is not working; it is treating the UUID generator as a string. Any way to solve this?
OLD-
changeSet:
  id: fulfillment-seed-data-1
  author: sas
  preConditions:
    onFail: MARK_RAN
    sqlCheck:
      expectedResult: 0
      sql: select count(*) from ${schema}.global_setting;
  changes:
    - sql:
        dbms: PostgreSQL
        splitStatements: true
        stripComments: true
        sql: INSERT INTO ${schema}.global_setting (global_setting_id, spec_nm, app_nm, spec_value_txt, spec_desc) VALUES(uuid_in(md5(random()::text || clock_timestamp()::text)::cstring), 'PROD_DIMENSION_TYPE_ID', 'FULFILLMENT', '', '');
NEW-
changeSet:
  id: fulfillment-seed-data-1
  author: sas
  preConditions:
    - dbms:
        type: PostgreSQL
    - onFail: MARK_RAN
  changes:
    - insert:
        columns:
          - column:
              name: global_setting_id
              value:
                - uuid_in(md5(random()::text || clock_timestamp()::text)::cstring)
          - column:
              name: spec_nm
              value: PROD_DIMENSION_TYPE_ID
          - column:
              name: app_nm
              value: FULFILLMENT
          - column:
              name: spec_value_txt
              value:
          - column:
              name: spec_desc
              value:
        tableName: global_setting

Can you use valueComputed for the function call that you are trying to use to compute the value for the column?
https://docs.liquibase.com/concepts/changelogs/attributes/column.html
In the old case you are using straight SQL to make your update.
In the new format you are modeling the changes, so you need to tell Liquibase to execute that function/stored procedure to populate the column value instead, as in the sketch below.
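For example, a minimal sketch of the insert change using valueComputed (same table and expression as above, trimmed to two columns):
changes:
  - insert:
      tableName: global_setting
      columns:
        - column:
            name: global_setting_id
            valueComputed: uuid_in(md5(random()::text || clock_timestamp()::text)::cstring)
        - column:
            name: spec_nm
            value: PROD_DIMENSION_TYPE_ID
With valueComputed, Liquibase passes the expression through to the database to evaluate, instead of inserting it as a literal string.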

Related

How to store the variable key and value that has been extracted from json into another variable with same format in azure pipeline?

I have a variable template
var1.yml
variables:
  - name: TEST_DB_HOSTNAME
    value: 10.123.56.222
  - name: TEST_DB_PORTNUMBER
    value: 1521
  - name: TEST_USERNAME
    value: TEST
  - name: TEST_PASSWORD
    value: TEST
  - name: TEST_SCHEMANAME
    value: SCHEMA
  - name: TEST_ACTIVEMQNAME
    value: 10.123.56.223
  - name: TEST_ACTIVEMQPORT
    value: 8161
When I run the below pipeline
resources:
  repositories:
    - repository: templates
      type: git
      name: pipeline_templates
      ref: refs/heads/master
trigger:
  - none
variables:
  - template: templates/var1.yml#templates
pool:
  name: PoolA
steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
I get the output
{
build.sourceBranchName: master,
build.reason: Manual,
system.pullRequest.isFork: False,
system.jobParallelismTag: Public,
system.enableAccessToken: SecretVariable,
TEST_DB_HOSTNAME: 10.123.56.222,
TEST_DB_PORTNUMBER: 1521,
TEST_USERNAME: TEST,
TEST_PASSWORD: TEST,
TEST_SCHEMANAME: SCHEMA,
TEST_ACTIVEMQNAME: 10.123.56.223,
TEST_ACTIVEMQPORT: 8161
}
How can I modify the pipeline to extract only the key/value pairs whose keys start with "Test_" and store them in another variable in the same format, so that they can be used in other tasks in the same pipeline?
Or iterate through the entries whose keys start with "Test_" and get their values?
The output you have shown is not valid JSON and cannot be processed as JSON. Assuming that it were valid JSON:
{
"build.sourceBranchName": "master",
"build.reason": "Manual",
"system.pullRequest.isFork": "False",
"system.jobParallelismTag": "Public",
"system.enableAccessToken": "SecretVariable",
"TEST_DB_HOSTNAME": "10.123.56.222",
"TEST_DB_PORTNUMBER": 1521,
"TEST_USERNAME": "TEST",
"TEST_PASSWORD": "TEST",
"TEST_SCHEMANAME": "SCHEMA",
"TEST_ACTIVEMQNAME": "10.123.56.223",
"TEST_ACTIVEMQPORT": 8161
}
then you can use the to_entries or with_entries filters of jq to get an object containing only those keys which start with "TEST_":
with_entries(select(.key|startswith("TEST_")))
This will give you a new object as output:
{
"TEST_DB_HOSTNAME": "10.123.56.222",
"TEST_DB_PORTNUMBER": 1521,
"TEST_USERNAME": "TEST",
"TEST_PASSWORD": "TEST",
"TEST_SCHEMANAME": "SCHEMA",
"TEST_ACTIVEMQNAME": "10.123.56.223",
"TEST_ACTIVEMQPORT": 8161
}
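For reference, that filter can be run from a command line like so (a sketch; variables.json is a hypothetical file holding the cleaned-up JSON shown above):
jq 'with_entries(select(.key|startswith("TEST_")))' variables.json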
The convertToJson() function is a bit messy, as the "JSON" it creates is not, in fact, valid JSON.
There are several possible approaches I can think of:
Use convertToJson() to pass the not-quite-valid JSON to a script step, convert it to valid JSON (or parse it directly) and then extract the relevant values, as in the sketch below. I have done this before and it typically works if you have control over the data in the variables. The downside is that there is a risk that the conversion can fail.
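For example, a rough sketch of that approach in a PowerShell step. This is illustrative only: rather than producing fully valid JSON, it scans the pseudo-JSON line by line, keeps only the TEST_ entries, and assumes simple scalar values without embedded commas:
steps:
  - pwsh: |
      # Capture the pseudo-JSON emitted by convertToJson() in a here-string
      $raw = @'
      ${{ convertToJson(variables) }}
      '@
      # Keep only lines whose key starts with TEST_ and rebuild them as a dictionary
      $testVars = [ordered]@{}
      foreach ($line in $raw -split "`n") {
        if ($line -match '^\s*"?(TEST_[^":]+)"?\s*:\s*"?([^",]*)') {
          $testVars[$Matches[1]] = $Matches[2]
        }
      }
      $testVars | ConvertTo-Json
    displayName: Extract TEST_ variables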
Create a YAML loop that iterates the variables and extracts the ones that begin with Test_. You can find examples of how to write a loop here, but basically, it would look like this:
- stage:
  variables:
    firstVar: 1
    secondVar: 2
    Test_thirdVar: 3
    Test_forthVar: 4
  jobs:
    - job: loopVars
      steps:
        - ${{ each var in variables }}:
          - script: |
              echo ${{ var.key }}
              echo ${{ var.value }}
            displayName: handling ${{ var.key }}
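Building on that loop, a sketch that only handles the variables whose names begin with Test_, using a conditional insertion with the startsWith expression (variable names as above):
steps:
  - ${{ each var in variables }}:
    - ${{ if startsWith(var.key, 'Test_') }}:
      - script: |
          echo ${{ var.key }}
          echo ${{ var.value }}
        displayName: handling ${{ var.key }}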
If applicable to your use case, you can create complex parameters (instead of variables) for only the Test_ variables. Using this, you could use the relevant values directly and would not need to extract a subset from your variable list. Note however, that parameters are inputs to a pipeline and can be adjusted before execution. Example:
parameters:
  - name: non-test-variables
    type: object
    default:
      firstVar: 1
      secondVar: 2
  - name: test-variables
    type: object
    default:
      Test_thirdVar: 3
      Test_forthVar: 4
You can then use these in the pipeline, for example by referencing ${{ parameters['test-variables'].Test_thirdVar }} (index syntax because of the hyphen in the parameter name).
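And a sketch of iterating over that object parameter in a job (assuming the parameter names above):
steps:
  - ${{ each kv in parameters['test-variables'] }}:
    - script: echo "${{ kv.key }} = ${{ kv.value }}"
      displayName: handling ${{ kv.key }}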

ansible parsing json into include_tasks loop

I have the following JSON file called cust.json:
{
  "customer": {
    "CUST1": {
      "zone": "ZONE1",
      "site": "ASIA"
    },
    "CUST2": {
      "zone": "ZONE2",
      "site": "EUROPE"
    }
  }
}
I am using this json file in my main.yml to get a list of customers (CUST1 and CUST2).
main.yml:
- name: Include the vars
  include_vars:
    file: "{{ playbook_dir }}/../default_vars/cust.json"
    name: "cust_json"

- name: Generate customer config
  include_tasks: create_config.yml
  loop: "{{ cust_json.customer }}"
I was hoping the loop would basically pass each customer's code (e.g. CUST1) to create_config.yml, so that something like the following can happen:
create_config.yml:
- name: Create customer config
  block:
    - name: create temporary file for customer
      tempfile:
        path: "/tmp"
        state: file
        prefix: "mycustomerconfig_{{ item }}."
        suffix: ".tgz"
      register: tempfile

    - name: Setup other things
      include_tasks: "othercustconfigs.yml"
Which would result in:
The following files being generated: /tmp/mycustomerconfig_CUST1 and /tmp/mycustomerconfig_CUST2
The tasks within othercustconfigs.yml being run for CUST1 and CUST2.
Questions:
Running the Ansible playbook, it fails at this point:
TASK [myrole : Generate customer config ] ************************************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {
    "msg": "Invalid data passed to 'loop', it requires a list, got this instead: {u'CUST1': {u'site': u'ASIA', u'zone': u'ZONE1'}, u'CUST2': {u'site': u'EUROPE', u'zone': u'ZONE2'}}. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."
}
How do I loop over the JSON so that I get the list of customers (CUST1 and CUST2) correctly? loop: "{{ cust_json.customer }}" clearly doesn't work.
If I manage to get the above working, is it possible to pass the result of the loop on to the next include_tasks: "othercustconfigs.yml"? So basically, passing the looped items from main.yml to create_config.yml, and then to othercustconfigs.yml. Is this possible?
Thanks!!
J
cust_json.customer is a hashmap containing one key for each customer, not a list.
The dict2items filter can transform this hashmap into a list of elements each containing a key and value attribute, e.g:
- key: "CUST1"
value:
zone: "ZONE1"
site: "ASIA"
- key: "CUST2"
value:
zone: "ZONE2"
site: "EUROPE"
With this in mind, you can transform your include to the following:
- name: Generate customer config
  include_tasks: create_config.yml
  loop: "{{ cust_json.customer | dict2items }}"
and the relevant task in your included file to:
- name: create temporary file for customer
  tempfile:
    path: "/tmp"
    state: file
    prefix: "mycustomerconfig_{{ item.key }}."
    suffix: ".tgz"
  register: tempfile
Of course you can adapt all this to use the value element where needed, e.g. item.value.site
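For instance, a sketch of what othercustconfigs.yml could do with the loop data; the item set by the loop in main.yml is still in scope inside the included files for that iteration:
# othercustconfigs.yml (sketch)
- name: Do customer-specific setup
  debug:
    msg: "Configuring {{ item.key }} in zone {{ item.value.zone }} at site {{ item.value.site }}"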
You can see the following documentation for in-depth info and alternative solutions:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#dict-filter
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#iterating-over-a-dictionary
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#with-dict
https://jinja.palletsprojects.com/en/2.11.x/templates/#dictsort

How to fetch Array data in the key value format in MYSQL

Can anyone please tell me how we can fetch the data from the below array, which is in the form of key/value pairs? I want the information in such a way:
select owner,TTL,class,Type, description;
Table :
[
Owner: AWS00003Instance.domain.com.,
TTL: 1200,
Class: IN,
Type: A,
Data: 192.168.0.68,
Description:default A rec, ,
Owner: rr1.domain.com.,
TTL: 1200,
Class: IN,
Type: A,
Data: 192.168.0.68,
Description:test
]
Thanks
With plain MySQL from the command line, the best you can get without much effort is:
select owner,TTL,class,Type, description\G
*************************** 1. row ***************************
owner: AWS00003Instance.domain.com.
TTL: 1200
class: IN
Type: A
Description: default A rec
...

Drupal 8 custom migration with source csv

How can I import fields into a Drupal 8 user profile from a CSV file that has identical values in the key column?
I'm using Migrate Tools, Migrate Plus and Migrate Source CSV. My CSV file looks like this:
"Stg";"Color";"Fruit"
"user1";"red";"apple"
"user1";"blue";"pear"
"user2";"green";"banana"
"user2";"black";"rotten banana"
I'm using the Profile Module (https://www.drupal.org/project/profile)
I have a migration within my migration group that looks like this (migrate_plus.migration.user_vorlesungen.yml):
id: user_vorlesungen
langcode: de
status: true
dependencies:
  enforced:
    module:
      - user_migrate
migration_group: hoevwa
label: 'HoeVWA Vorlesungen Import'
source:
  plugin: csv
  track_changes: true
  path: /config/Vorlesungsverzeichnis.csv
  # Column delimiter. Comma (,) by default.
  delimiter: ';'
  # Field enclosure. Double quotation marks (") by default.
  enclosure: '"'
  header_row_count: 1
  keys:
    - Stg
destination:
  plugin: entity:profile
process:
  type:
    plugin: default_value
    default_value: 'vorlesungen'
  uid:
    plugin: migration_lookup
    no_stub: true
    # previous user migration
    migration: user__hoerer
    # property in the source data
    source: Stg
  # These fields have multiple values in D8
  field_color: Color
  field_fruit: Fruit
migration_dependencies:
  required: { }
  optional: { }
In my Twig template the content is printed like this:
...
<table>
  <tr>
    <td>{{ content.field_color }}</td>
    <td>{{ content.field_fruit }}</td>
  </tr>
</table>
...
When I run drush mim --group=hoevwa, only the last values of user1 (blue, pear) are imported. How can I get a process plugin running that loops through the CSV and imports all values? And finally, how can I loop through all the values in my Twig template?

How can I describe this JSON object in swagger parameters?

I've looked at a few other related questions and I still can't seem to find what I'm looking for. This is an example JSON payload being sent to an API I'm writing:
{
  "publishType": "Permitable",
  "electricalPanelCapacity": 0.0,
  "roofConstruction": "Standard/Pitched",
  "roofType": "Composition Shingle",
  "systemConstraint": "None",
  "addedCapacity": 0.0,
  "isElectricalUpgradeRequired": false,
  "cadCompletedBy": "94039",
  "cadCompletedDate": "2017-02-01T02:18:15Z",
  "totalSunhourDeficit": 5.0,
  "designedSavings": 5.0,
  "isDesignedWithinTolerance": "N/A",
  "energyProduction": {
    "january": 322.40753170051255,
    "february": 480.61501312589826,
    "march": 695.35215022905118,
    "april": 664.506907341219,
    "may": 877.53769491124172,
    "june": 785.56924358327,
    "july": 782.64347308783363,
    "august": 760.1123565793057,
    "september": 574.67050827435878,
    "october": 524.53797441350321,
    "november": 324.31132291046379,
    "december": 280.46921069200033
  },
  "roofSections": [{
    "name": "North East Roof 4",
    "roofType": "Composition Shingle",
    "azimuth": 55.678664773137086,
    "tilt": 15.0,
    "solmetricEstimate": 510.42831656979456,
    "shadingLoss": 14.0,
    "systemRating": 580.0,
    "sunHours": 0.88004882167205956,
    "moduleCount": 2,
    "modules": [{
      "moduleRating": 290.0,
      "isovaPartNumber": "CDS-MON-007070",
      "partCount": 2
    }]
  }, {
    "name": "South West Roof 3",
    "roofType": "Composition Shingle",
    "azimuth": 235.67866481720722,
    "tilt": 38.0,
    "solmetricEstimate": 3649.1643776261653,
    "shadingLoss": 59.0,
    "systemRating": 5220.0,
    "sunHours": 0.69907363556056812,
    "moduleCount": 18,
    "modules": [{
      "moduleRating": 290.0,
      "isovaPartNumber": "CDS-MON-007070",
      "partCount": 18
    }]
  }, {
    "name": "South East Roof",
    "roofType": "Composition Shingle",
    "azimuth": 145.67866477313709,
    "tilt": 19.0,
    "solmetricEstimate": 2913.1406926526984,
    "shadingLoss": 31.0,
    "systemRating": 2900.0,
    "sunHours": 1.0045312733285168,
    "moduleCount": 10,
    "modules": [{
      "moduleRating": 290.0,
      "isovaPartNumber": "CDS-MON-007070",
      "partCount": 10
    }]
  }],
  "SystemConfiguration": {
    "inverters": [{
      "isovaPartNumber": "ENP-INV-007182",
      "partCount": 30
    }]
  }
}
Describing all the beginning parameters was easy.
/post/new-cad/{serviceNumber}:
  post:
    summary: Publish a new CAD record.
    description: Creates a new CAD record under the provided service number and returns the name of the new CAD record, the unique SF ID, and the deep link URL for Salesforce.
    parameters:
      - name: serviceNumber
        in: path
        description: The service number for the solar project you're interested in publishing to.
        required: true
        type: string
      - name: publishType
        in: formData
        description: The type of CAD record to publish (Proposal, Permitable, or AsBuilt).
        required: true
        type: string
      - name: electricalPanelCapacity
        in: formData
        required: true
        type: number
        format: double
      - name: roofConstruction
        in: formData
        description: New, Flat Roof, Open Beam, Standard/Pitched
        required: true
        type: string
      - name: roofType
        in: formData
        description: Composition Shingle, Membrane (Rubber, TPO, PVC, EPDM), Metal - Corrugated (S-Curve), Metal - Standing Seam, Metal - Trapezoidal, Multi Roof Type, Rolled Comp, Silicone, Tar & Gravel, Tile - Flat, Tile - S-Curve, or Tile - W-Curve
        type: string
      - name: systemConstraint
        in: formData
        description: Usage, None, Roof, Electrical, Shading, or 10kW Max
        required: true
        type: string
      - name: addedCapacity
        in: formData
        required: true
        type: number
        format: double
      - name: isElectricalUpgradeRequired
        in: formData
        type: boolean
      - name: cadCompletedBy
        in: formData
        description: Employee ID of record author.
        type: number
        required: true
      - name: cadCompletedDate
        in: formData
        description: The date the CAD record was completed.
        type: string
        format: date
        required: true
      - name: totalSunhourDeficit
        in: formData
        type: number
        format: double
      - name: designedSavings
        in: formData
        type: number
        format: double
      - name: isDesignedWithinTolerance
        in: formData
        type: string
        description: Yes, No, or N/A
This yields the expected result in Swagger-UI.
But now I'm struggling with the last parts of the example JSON payload above. I'm unsure how to express the energyProduction key which is an object with a key for each month of the year. I'm also unsure how to describe roofSections which is an array of objects and systemConfiguration which is an object with a property inverters whose value is an array of objects.
I'm going over the swagger documentation quite a bit but I'm still pretty confused and hoping maybe someone here can explain things a little better to me.
I figured it out. Turns out formData is not what I should have been using for my parameters. Instead I needed to use body and define the structure of the JSON that would populate the body. Here is the completed design file, using a body parameter with an object schema that describes all the nested objects and arrays as well.
/new-cad/{serviceNumber}:
  post:
    summary: Publish a new CAD record.
    description: Creates a new CAD record under the provided service number and returns the name of the new CAD record, the unique SF ID, and the deep link URL for Salesforce.
    parameters:
      - name: serviceNumber
        in: path
        description: The service number for the solar project you're interested in publishing to.
        required: true
        type: string
      - name: cadData
        in: body
        description: A JSON payload containing the data required to publish a new CAD record.
        required: true
        schema:
          type: object
          properties:
            publishType:
              type: string
              default: "Proposal"
              enum: ["Proposal","Permitable","AsBuilt"]
            electricalPanelCapacity:
              type: number
            roofConstruction:
              type: string
              default: "New"
              enum: ["New","Flat Roof","Open Beam","Standard/Pitched"]
            roofType:
              type: string
              enum: ["Composition Shingle","Membrane (Rubber, TPO, PVC, EPDM)","Metal - Corrugated (S-Curve)","Metal - Standing Seam","Metal - Trapezoidal","Multi Roof Type","Rolled Comp","Silicone","Tar & Gravel","Tile - Flat","Tile - S-Curve","Tile - W-Curve"]
            systemConstraint:
              type: string
              default: "None"
              enum: ["None","Usage","Roof","Electrical","Shading","10kW Max"]
            addedCapacity:
              type: number
              default: 0
            isElectricalUpgradeRequired:
              type: boolean
            cadCompletedBy:
              type: string
            cadCompletedDate:
              type: string
            totalSunhourDeficit:
              type: number
            designedSavings:
              type: number
            isDesignedWithinTolerance:
              type: string
              default: "N/A"
              enum: ["N/A","Yes","No"]
            energyProduction:
              type: object
              properties:
                january:
                  type: number
                february:
                  type: number
                march:
                  type: number
                april:
                  type: number
                may:
                  type: number
                june:
                  type: number
                july:
                  type: number
                august:
                  type: number
                september:
                  type: number
                october:
                  type: number
                november:
                  type: number
                december:
                  type: number
            roofSections:
              type: array
              items:
                type: object
                properties:
                  name:
                    type: string
                  roofType:
                    type: string
                    enum: ["Composition Shingle","Membrane (Rubber, TPO, PVC, EPDM)","Metal - Corrugated (S-Curve)","Metal - Standing Seam","Metal - Trapezoidal","Multi Roof Type","Rolled Comp","Silicone","Tar & Gravel","Tile - Flat","Tile - S-Curve","Tile - W-Curve"]
                  azimuth:
                    type: number
                  tilt:
                    type: number
                  solmetricEstimate:
                    type: number
                  shadingLoss:
                    type: number
                  systemRating:
                    type: number
                  sunHours:
                    type: number
                  moduleCount:
                    type: integer
                  modules:
                    type: array
                    items:
                      type: object
                      properties:
                        moduleRating:
                          type: number
                        isovaPartNumber:
                          type: string
                        partCount:
                          type: integer
            systemConfiguration:
              type: object
              properties:
                inverters:
                  type: array
                  items:
                    type: object
                    properties:
                      isovaPartNumber:
                        type: string
                      partCount:
                        type: integer
    tags:
      - NEW-CAD
    responses:
      200:
        description: CAD record created successfully.
        schema:
          type: object
          properties:
            cadName:
              type: string
            sfId:
              type: string
            sfUrl:
              type: string
        examples:
          cadName: some name
          sfId: a1o4c0000000GGAQA2
          sfUrl: http://some-url-to-nowhere.com
      204:
        description: No project could be found for the given service number.
      500:
        description: Unexpected error. Most likely while communicating with Salesforce.
        schema:
          type: string
So now I can still get the serviceNumber from the path while everything else comes in the request body. One thing to keep in mind here is that you cannot use all the same Swagger data types. For example, I tried to use double for one of the properties and Swagger complained that it couldn't parse type double. I was very confused until I finally found the section of the docs describing the difference between formData parameters and a body parameter (of which you can only have one, because it describes the entire request body). Basically, you can only use data types that are supported by JSON Schema.
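For instance, the same field shown both ways (both taken from the definitions above). As a formData parameter it was declared as
- name: electricalPanelCapacity
  in: formData
  required: true
  type: number
  format: double
while inside the body schema's properties it is simply
electricalPanelCapacity:
  type: number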
Swagger-UI now shows a single textarea instead of multiple input fields for each parameter. Not as pretty but it works great. You can click the "Example Value" box on the right and it places a predefined JSON template in the textarea for you so you can just fill in the values.
If you are just learning Swagger like I am I hope this helps!