Karate does not display a response after a POST request with status 201 [duplicate]

This question already has an answer here:
Support passing from Scenario Outline to JSON file
(1 answer)
Closed 2 years ago.
I am struggling with the following test, which is usually pretty easy...
Feature: Testing Env Create Feature
Scenario Outline: Create works as intended
Given url "http://localhost:10000/api/envs"
And request {"name": <Name>,"gcpProjectName": <GcpProjectName>,"url": <Url>}
When method POST
Then status 201
And match response contains {"id": #string, "name": <Name>,"gcpProjectName": <GcpProjectName>,"url": <Url>}
Examples:
| Name | GcpProjectName | Url |
| tests | D-COO-ContinuousCollaboration | https://fake.com |
| approval | Q-COO-ContinuousCollaboration | https://fake.com |
| demo | P-COO-ContinuousCollaboration | https://fake.com |
| prod | P-COO-ContinuousCollaboration | https://fake.com |
I am supposed to get a response summarizing my POST request, and I do get it using curl, Postman or even Swagger, but it does not appear with Karate:
[failed features:
src.test.features.envtest.env-create: [1.1:13] env-create.feature:9 - path: $, actual: '', expected: '{"id":"#string","name":"tests","gcpProjectName":"D-COO-ContinuousCollaboration","url":"https://fake.com"}', reason: not a sub-string
Does anyone know what is happening?
Thanks for your help.

Just add quotes around the string substitutions:
And request {"name": "<Name>", "gcpProjectName": "<GcpProjectName>", "url": "<Url>" }
The placeholders in the match step need the same quoting, since the substituted values are plain strings.

Related

How to solve the issue "TypeError: strings.slice(...).reduce is not a function"?

I recently discovered Gatsby and I want to use this template for my own website:
https://github.com/toboko/gatsby-starter-fine
After downloading it, I managed to run it at http://localhost:8000/, but I get this error which I can't escape:
TypeError: strings.slice(...).reduce is not a function
I added my repository here so you can take a look too: https://github.com/melariza/gatsby-starter-fine
Could you take a look and help fix it?
Here's the error text:
TypeError: strings.slice(...).reduce is not a function
css
/Users/mga/Sites/gatsby-starter-fine/.cache/loading-indicator/style.js:5
2 |
3 | function css(strings, ...keys) {
4 | const lastIndex = strings.length - 1
> 5 | return (
6 | strings.slice(0, lastIndex).reduce((p, s, i) => p + s + keys[i], ``) +
7 | strings[lastIndex]
8 | )
View compiled
Style
/Users/mga/Sites/gatsby-starter-fine/.cache/loading-indicator/style.js:14
11 | const Style = () => (
12 | <style
13 | dangerouslySetInnerHTML={{
> 14 | __html: css`
15 | :host {
16 | --purple-60: #663399;
17 | --gatsby: var(--purple-60);
View compiled
▶ 18 stack frames were collapsed.
(anonymous function)
/Users/mga/Sites/gatsby-starter-fine/.cache/app.js:165
162 | dismissLoadingIndicator()
163 | }
164 |
> 165 | renderer(<Root />, rootElement, () => {
166 | apiRunner(`onInitialClientRender`)
167 |
168 | // Render query on demand overlay
View compiled
I guess the problem is related to Node and its dependencies. The repository is not an official Gatsby starter and its last commit dates from 3 years ago. Gatsby is now on version 4.14 while the starter is on ^2.0.50: two major versions have happened in the last 3 years in Gatsby alone, so imagine the rest of the dependencies.
The starter doesn't contain a .nvmrc file or an engines property in package.json, so the Node version that project runs on is unknown. Be aware that if you clone or fork that project, you will have a lot of deprecated dependencies and several migrations to do (from v2 to v3 and from v3 to v4).
So my advice is to avoid that repository and use one of the official starters. If that's not an option, try playing around with the Node version, starting from 12 onwards, reinstalling node_modules each time you upgrade or downgrade.

Pass default 0 value to missing field in json log search in Sumo Logic

I am trying to parse AWS ECR scan JSON logs to build a vulnerabilities report table using the query below in Sumo Logic. The issue is that AWS ECR sends the CRITICAL or HIGH fields only when such findings exist; otherwise it omits them. How can I set CRITICAL to 0 when it is not present in the JSON logs?
I tried using isNull, isEmpty, and isBlank, but it seems I am missing something. Please share your advice. Thanks in advance.
_source="aws_ecr_events_test"
| json field=message "detail.repository-name" as repository_name
| json field=message "detail.image-tags" as tags
| json field=message "time" as last_scan
| json field=message "detail.finding-severity-counts.CRITICAL" as CRITICAL
| if(isNull("detail.finding-severity-counts.CRITICAL"), 0, CRITICAL) as CRITICAL
| json field=message "detail.finding-severity-counts.HIGH" as HIGH
| json field=message "detail.finding-severity-counts.MEDIUM" as MEDIUM
| json field=message "detail.finding-severity-counts.INFORMATIONAL" as INFORMATIONAL
| json field=message "detail.finding-severity-counts.LOW" as LOW
| json field=message "detail.finding-severity-counts.UNDEFINED" as UNDEFINED
| json field=message "detail.image-digest" as image_digest
| json field=message "detail.scan-status" as scan_status
| count by repository_name, tags, image_digest, scan_status, last_scan, CRITICAL, HIGH, MEDIUM, LOW, INFORMATIONAL, UNDEFINED
Example log:
detail:{finding-severity-counts:{LOW:1,HIGH:1}}
I think you're on the right track, but you might need a "nodrop" at the end of the parse line, otherwise Sumo Logic will just drop the records that don't match the json parse statement:
...
| json field=message "detail.finding-severity-counts.CRITICAL" as CRITICAL nodrop
| if(isNull(CRITICAL), 0, CRITICAL) as CRITICAL

Dependency Parsing in Spacy

I want to extract verb-noun pairs from my text using dependency parsing.
I did this:
import spacy

# The question doesn't show how nlp was created; assuming the small English model here.
nlp = spacy.load('en_core_web_sm')
document = nlp('appoint department heads or managers and assign or delegate responsibilities to them ')

print("{:<15} | {:<8} | {:<15} | {:<20}".format('Token', 'Relation', 'Head', 'Children'))
print("-" * 70)
for token in document:
    print("{:<15} | {:<8} | {:<15} | {:<20}"
          .format(str(token.text), str(token.dep_), str(token.head.text), str([child for child in token.children])))

from spacy import displacy
displacy.render(document, style='dep', jupyter=True)
Can you guys help me do a cleaner one?
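For the verb-noun pairs themselves, here is a minimal sketch of one way to read them off the parse, reusing the document object from above. The helper name, the choice of dependency labels (dobj, obj, nsubj, nsubjpass) and the VERB/NOUN filter are assumptions of this sketch, not something the question specifies:

# Hypothetical helper: pair each verb with the nouns attached to it in the parse.
# Which dependency relations to keep is an assumption of this sketch.
NOUN_DEPS = {"dobj", "obj", "nsubj", "nsubjpass"}

def verb_noun_pairs(doc):
    pairs = []
    for token in doc:
        if token.pos_ == "VERB":
            for child in token.children:
                if child.dep_ in NOUN_DEPS and child.pos_ in {"NOUN", "PROPN"}:
                    pairs.append((token.lemma_, child.lemma_))
    return pairs

print(verb_noun_pairs(document))
# Something like [('appoint', 'head'), ('assign', 'responsibility')] -- the exact output depends on the model.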

Assign puppet Hash to hieradata yaml

I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want; in the end I don't want to access a fact.
---
filesystems:
  - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems',Array,'deep',[])
2 $filesystems = $filesystemsarray.map | $fs | {
3 notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
This seems to be a Hash, as expected, but the notice in line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What am I doing wrong?

Trying to understand the number of ParseErrors in html5lib-tests

I was looking at the following test case in html5lib-tests:
{"description":"<!DOCTYPE\\u0008",
"input":"<!DOCTYPE\u0008",
"output":["ParseError", "ParseError", "ParseError",
["DOCTYPE", "\u0008", null, null, false]]},
State                      | Input char | Actions
---------------------------|------------|--------------------------------------------------------
Data state                 | "<"        | -> TagOpenState
TagOpenState               | "!"        | -> MarkupDeclarationOpenState
MarkupDeclarationOpenState | "DOCTYPE"  | -> DOCTYPE state
DOCTYPE state              | "\u0008"   | Parse error; -> before DOCTYPE name state (reconsume)
before DOCTYPE name state  | "\u0008"   | DOCTYPE(name = "\u0008"); -> DOCTYPE name state
DOCTYPE name state         | EOF        | Parse error. Set force quirks on. Emit DOCTYPE. -> Data state.
Data state                 | EOF        | Emit EOF.
I'm wondering where those three parse errors come from. I can only account for two, so I assume I'm making an error in my logic somewhere.
The one you're missing is the one from the "Preprocessing the input stream" section:
Any occurrences of any characters in the ranges U+0001 to U+0008, U+000E to U+001F, U+007F to U+009F, U+FDD0 to U+FDEF, and characters U+000B, U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, U+2FFFE, U+2FFFF, U+3FFFE, U+3FFFF, U+4FFFE, U+4FFFF, U+5FFFE, U+5FFFF, U+6FFFE, U+6FFFF, U+7FFFE, U+7FFFF, U+8FFFE, U+8FFFF, U+9FFFE, U+9FFFF, U+AFFFE, U+AFFFF, U+BFFFE, U+BFFFF, U+CFFFE, U+CFFFF, U+DFFFE, U+DFFFF, U+EFFFE, U+EFFFF, U+FFFFE, U+FFFFF, U+10FFFE, and U+10FFFF are parse errors. These are all control characters or permanently undefined Unicode characters (noncharacters).
This causes a parse error prior to the U+0008 character ever reaching the tokenizer. Given the tokenizer is defined as reading from the input stream, the tokenizer tests assume the input stream has its normal preprocessing applied to it.
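For illustration only, here is a rough sketch (not html5lib's actual implementation; the helper name is made up) of the preprocessing check, using the character ranges quoted above. It reports the one error that precedes the two tokenizer errors traced in the question's table:

# Rough illustration of the "Preprocessing the input stream" check.
# The ranges below are the ones quoted from the spec above.
INVALID_RANGES = [(0x0001, 0x0008), (0x000E, 0x001F), (0x007F, 0x009F), (0xFDD0, 0xFDEF)]
INVALID_SINGLES = {0x000B, 0xFFFE, 0xFFFF} | {
    base + offset for base in range(0x1FFFE, 0x10FFFF, 0x10000) for offset in (0, 1)
}

def preprocessing_parse_errors(stream: str) -> int:
    """Count parse errors the preprocessing stage reports before tokenization."""
    errors = 0
    for ch in stream:
        cp = ord(ch)
        if cp in INVALID_SINGLES or any(lo <= cp <= hi for lo, hi in INVALID_RANGES):
            errors += 1
    return errors

print(preprocessing_parse_errors("<!DOCTYPE\u0008"))
# 1 error here, plus the two tokenizer errors traced above, gives the 3 expected ParseErrors.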