How to define a function that adds the values in HashMaps in Python?

The question is: create a function which will calculate the total stock worth in the cafe. You will need to remember to loop through the appropriate maps and lists to do this.
What I have so far:
menu = ("Coffee", "Tea", "Cake", "Cookies")
stock = {
    "Coffee": 10,
    "Tea": 17,
    "Cake": 15,
    "Cookies": 5,
}
price = {
    "Coffee": 'R 12',
    "Tea": 'R 11',
    "Cake": 'R 20',
    "Cookies": 'R 8',
}
def totalstock(stock):
Now I'm stuck. I know there should be a loop and a sum function, but I don't know how to convert the strings to ints so I can add them.

In this case your price dictionary doesn't just have numbers, so you'll have to separate the 'R' from the number. Example:
coffee_price = int(price['Coffee'].split(' ')[1])
To explain: take the string at price['Coffee'] and split it, giving a list with two values. Pass the second value to int() to convert it to an integer, which is stored in coffee_price.
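Putting that together, a minimal sketch of the whole function (assuming "total stock worth" means units in stock times price per unit, summed over all items, and passing price in as a second argument):
def totalstock(stock, price):
    total = 0
    for item in stock:
        # "R 12" -> ["R", "12"] -> 12
        unit_price = int(price[item].split(' ')[1])
        total += stock[item] * unit_price
    return total

print(totalstock(stock, price))  # (10*12) + (17*11) + (15*20) + (5*8) = 647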

Related

How can I remove null values and keys from JSON using COBOL statements - V6.1 Enterprise COBOL

I can't figure out how to remove null values (and the corresponding keys) from the output JSON.
Using JSON GENERATE, I am creating JSON output, and in the output I am getting \u0000 as null.
I want to identify and remove null values and keys from the JSON.
COBOL version: 6.1.0
I am generating a file with PIC X(5000). I tried INSPECT and other statements, but no luck :(
For example:
{
    "Item": "A",
    "Price": 12.23,
    "Qty": 123
},
{
    "Item": "B",
    "Price": \u000,
    "Qty": 234
},
{
    "Item": "C",
    "Price": 23.2,
    "Qty": \u0000
}
In the output I want:
{
    "Item": "A",
    "Price": 12.23,
    "Qty": 123
},
{
    "Item": "B",
    "Qty": 234
},
{
    "Item": "C",
    "Price": 23.2,
}
Approach 1:
I created the JSON using the JSON GENERATE statement and defined the output field as PIC X(50000).
After converting to UTF-8, I tried to use INSPECT to find the '\u0000' by its hex value, but it has no effect on UTF-8 arguments and I am not able to search for the '\u0000' values.
perform varying utf-8-pos from 1 by 1
    until utf-8-pos = utf-8-end
    EVALUATE TRUE
        WHEN JSONOUT(1:utf-8-pos) = X'5C' *> first find a "\" in the output
            perform varying utf-8-pos from 1 by 1
                until JSONOUT(1:utf-8-pos) = x'22' *> then find the closing quote " (X'22')
                move JSONOUT(1: utf-8-end - utf-8-pos) to JSONOUT *> skip the position of the null
            end-perform
        WHEN JSONOUT(1:utf-8-pos) NOT= X'5C'
            continue
        WHEN OTHER
            continue
    END-EVALUATE
end-perform
Approach 2:
I convert the item to UTF-16 in a national data item by using the NATIONAL-OF function,
then use INSPECT, EVALUATE, or PERFORM to find the '\u0000' using the hex value N'005C'.
But I am not able to find the correct position of the '\u0000'; I also tried NX'005C', but no luck.
IDENTIFICATION DIVISION.
PROGRAM-ID. JSONTEST.
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
SOURCE-COMPUTER. IBM-370-158.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 root.
   05 header.
      10 systemId PIC X(10).
      10 timestamp PIC X(30).
   05 payload.
      10 customerid PIC X(10).
77 Msglength PIC 9(05).
77 utf-8-pos PIC 9(05).
77 utf-8-end PIC 9(05).
01 jsonout PIC X(30000).
PROCEDURE DIVISION.
MAIN SECTION.
    MOVE "2012-12-18T12:43:37.464Z" to timestamp
    MOVE LOW-VALUES to customerid
    JSON GENERATE jsonout
        FROM root
        COUNT IN Msglength
        NAME OF root is OMITTED
             systemId IS 'id'
        ON EXCEPTION
            DISPLAY 'JSON EXCEPTION'
            STOP RUN
    END-JSON
    DISPLAY jsonout (1:Msglength)
    PERFORM skipnull.
MAIN-EXIT.
    EXIT.
Skipnull SECTION.
    perform varying utf-8-pos from 1 by 1
        until utf-8-pos = utf-8-end
        EVALUATE TRUE
            WHEN JSONOUT(1:utf-8-pos) = X'5C' *> first find a "\" in the output
                perform varying utf-8-pos from 1 by 1
                    until JSONOUT(1:utf-8-pos) = x'22' *> then find the closing quote " (X'22')
                    move JSONOUT(1: utf-8-end - utf-8-pos) to JSONOUT *> skip the position of the null
                end-perform
            WHEN JSONOUT(1:utf-8-pos) NOT= X'5C'
                continue
            WHEN OTHER
                continue
        END-EVALUATE
    end-perform.
Skipnull-exit.
    EXIT.
Sample output: since we don't have any value to fill in for customerid, in the output we get:
{"header" : {
"timestamp" : "2012-12-18T12:43:37.464Z",
"customerid" : "\u0000\u0000\u00000" }
}
In the result I want to skip customerid in the output; I want to skip both the name and the value in the output file.
Since Enterprise COBOL's JSON GENERATE is an all-in-one-go statement, I don't think there's an easy way to do this in V6.1.
Just to give you something to look forward to: Enterprise COBOL 6.3 offers an extended SUPPRESS clause that does just what you need:
JSON GENERATE JSONOUT FROM MYDATA COUNT JSONLEN
    SUPPRESS WHEN ZERO
    ON EXCEPTION DISPLAY 'ERROR JSON-CODE: ' JSON-CODE
    NOT ON EXCEPTION DISPLAY 'JSON GENERATED - LEN=' JSONLEN
You can also suppress WHEN SPACES, WHEN LOW-VALUE or WHEN HIGH-VALUE.
You can also limit suppression to certain fields:
SUPPRESS Price WHEN ZERO
Qty WHEN ZERO
Unfortunately this feature hasn't been backported to 6.1 yet (it's been added to 6.2 with the December 2020 PTF) and I don't know whether it will be...
I don't know anything about COBOL, but I needed the same thing in JavaScript, so I will share my JavaScript function with you. If you can translate this to COBOL, maybe it will help you.
function clearMyJson(obj) {
    for (var i in obj) {
        if ($.isArray(obj[i])) {
            if (obj[i].length == 0)
                delete obj[i];       // remove empty arrays
            else
                clearMyJson(obj[i]); // recurse to clean the array's elements
        } else if ($.isPlainObject(obj[i])) {
            clearMyJson(obj[i]);     // recurse to clean the nested object
        } else if (obj[i] == null || obj[i] === "") {
            delete obj[i];           // delete the property if it is null or empty
        }
    }
}
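For instance, in a page with jQuery loaded (the function relies on $.isArray and $.isPlainObject), a null at any depth is dropped:
var obj = JSON.parse('{"Item": "B", "Price": null, "Qty": 234}');
clearMyJson(obj);
console.log(JSON.stringify(obj)); // {"Item":"B","Qty":234}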

How do I concatenate a dynamic string value to a parsed JSON response using Groovy, to get a specific node value in the JSON?

slurperresponse = new JsonSlurper().parseText(responseContent)
log.info (slurperresponse.WorkItems[0].WorkItemExternalId)
The above code helps me get the node value "WorkItems[0].WorkItemExternalId" using Groovy. Below is the response.
{
    "TotalRecordCount": 1,
    "TotalPageCount": 1,
    "CurrentPage": 1,
    "BatchSize": 10,
    "WorkItems": [ {
        "WorkItemUId": "4336c111-7cd6-4938-835c-3ddc89961232",
        "WorkItemId": "20740900",
        "StackRank": "0",
        "WorkItemTypeUId": "00020040-0200-0010-0040-000000000000",
        "WorkItemExternalId": "79853"
    } ]
}
I need to append the string "WorkItems[0].WorkItemExternalId" (read from an Excel file), and multiple other such node paths, to slurperresponse dynamically to get the node values, rather than hard-coding slurperresponse.WorkItems[0].WorkItemExternalId.
I tried append and the "+" operator, but I get a compilation error. What other way can I do this?
slurperresponse is an object, not a string; that's why the concatenation does not work.
JsonSlurper creates an object out of the input string. This object is dynamic by nature: you can access it, add fields to it, or alter the existing fields. Concatenation won't work here.
Here is an example:
import groovy.json.*
def text = '{"total" : 2, "students" : [{"name": "John", "age" : 20}, {"name": "Alice", "age" : 21}] }'
def json = new JsonSlurper().parseText(text)
json.total = 3 // alter the value of the existing field
json.city = 'LA' // add a totally new field
json.students[0].age++ // change the field in a list
println json
This yields the output:
[total:3, students:[[name:John, age:21], [name:Alice, age:21]], city:LA]
Now if I've got you right you want to add a new student dynamically and the input is a text that you've read from Excel. So here is the example:
json.students << new JsonSlurper().parseText('{"name" : "Tom", "age" : 25}')
// now there are 3 students in the list
Update
It's also possible to get the values without 'hardcoding' the property name:
// option 1
println json.city // prints 'LA'
// option 2
println json.get('city') // prints 'LA' but here 'city' can be a variable
// option 3
println json['city'] // the same as option 2
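Building on option 2: if the whole path (not just one property name) comes from the Excel file, one thing worth trying (my own suggestion, not something shown above) is Groovy's Eval helper, which evaluates an expression string against a bound object:
import groovy.json.JsonSlurper

def text = '{"WorkItems" : [{"WorkItemExternalId" : "79853"}]}'
def json = new JsonSlurper().parseText(text)

// the full path string, e.g. as read from the Excel file
def path = 'WorkItems[0].WorkItemExternalId'

// Eval.x binds the parsed object to 'x' and evaluates the expression
def value = Eval.x(json, 'x.' + path)
println value // 79853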

Is there a way to randomize the jsonPath array number in get[] myJson?

I have a list of values that I can use for the title field in my json request. I would like to store a function in the common.feature file which randomizes the title value when a scenario is executed.
I have attempted to use the random number function provided on the commonly needed utilities tab of the readme. I have generated a random number successfully; the next step would be using that randomly generated number within the jsonPath line, in order to retrieve a value from my data list, which is in JSON.
* def myJson =
"""
{
  "title" : {
    "type" : "string",
    "enum" : [
      "MR",
      "MRS",
      "MS",
      "MISS"
      [...]
    ]
  }
}
"""
* def randomNumber = random(3)
* def title = get[0] myJson.title.enum
* print title
The code above works, but I would like to randomize the number within the get[0]. How is this possible in Karate?
I'm not sure what you want, but can't you just replace 0 with randomNumber, i.e. get[randomNumber] myJson.title.enum?
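To make it concrete, a minimal sketch of the feature lines (assuming random() is the readme utility already defined in your common.feature, and that the enum holds four values; plain JavaScript-style indexing also accepts a variable, so you don't need to rely on get[] taking one):
* def random = function(max){ return Math.floor(Math.random() * max) }
* def randomNumber = random(4)
* def title = myJson.title.enum[randomNumber]
* print title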

Serializing multiple API fields into one in Django

I have a pre-defined API, like:
{
    time : some_time,
    height : {1: 154, 2: 300, 3: 24},
    color : {1: 'red', 2: 'blue', 3: 'green'},
    age : {1: 27, 2: 324, 3: 1},
    ... many, many more keys ...
}
I have no control of this API, so cannot change its structure.
Each integer key inside the sub-dictionaries is linked and part of one record. For example, the object that is 154 in height also has colour 'red' and age 27.
I am aware one strategy to work with this is to have separate serialisers for each field.
class MySerializer(serializers.ModelSerializer):
    # Nested serializers
    height = HeightSerializer()
    colour = ColourSerializer()
    age = AgeSerializer()
    # etc, etc, etc
But that still gives me messy data to work with, that requires lots of update() logic in the serializer.
What I instead want is one nested serializer that has access to the full request data, can work with height, colour and age simultaneously, and returns something like this from the to_internal_value() method:
[
    {'record': 1, 'height': 154, 'colour': 'red', 'age': 27},
    {'record': 2, 'height': 300, 'colour': 'blue', 'age': 324},
    {'record': 3, 'height': 24, 'colour': 'green', 'age': 2},
]
But unfortunately the height serializer only seems to have access to information in fields called height. I am aware I can use source="foo" in the init call, but then it only has access to a field called "foo". I want it to have access to all fields.
I noticed there is a source='*' option, but it doesn't work: the init method of my serializer never gets called unless there is a key "height" in the API call.
Any ideas how I can have a nested serialiser that has access to all the data in the request?
Thanks
Joey

Couchbase: How to maintain arrays without duplicate elements?

We have a Couchbase store which has the Customer data.
Each customer has exactly one document in this bucket.
Daily transactions will result in making updates to this customer data.
Sample document. Let's focus on the purchased_product_ids array.
{
    "customer_id" : 1000,
    "purchased_product_ids" : [1, 2, 3, 4, 5]
    # in reality this is a big array - hundreds of elements
    ...
    ... many other elements ...
    ...
}
Existing purchased_product_ids:
[1, 2, 3, 4, 5]
Products purchased today:
[1, 2, 3, 6] // 6 is a new entry; the others already exist
Expected result after the update:
[1, 2, 3, 4, 5, 6]
I am using the Sub-Document API to avoid large data transfers between server and clients.
Option 1, "arrayAppend":
customerBucket.mutateIn(customerKey)
    .arrayAppend("purchased_product_ids", /* a JsonArray built from [1, 2, 3, 6] */)
    .execute();
It results in duplicate elements.
"purchased_product_ids" : [1, 2, 3, 4, 5, 1, 2, 3, 6]
Option 2, "arrayAddUnique":
customerBucket.mutateIn(customerKey)
    .arrayAddUnique("purchased_product_ids", 1)
    .arrayAddUnique("purchased_product_ids", 2)
    .arrayAddUnique("purchased_product_ids", 3)
    .arrayAddUnique("purchased_product_ids", 6)
    .execute();
It throws an exception most of the time, because those elements already exist.
Is there a better way to do this update?
You could use N1QL, and the ARRAY_APPEND() and ARRAY_DISTINCT() functions.
UPDATE customer USE KEYS "foo"
SET purchased_product_ids = ARRAY_DISTINCT(ARRAY_APPEND(purchased_product_ids, 9))
Presumably this would be a prepared statement and the key itself and the new value would be supplied as parameters.
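As a sketch of that (not a tested implementation), using the same SDK 2.x API as the question's mutateIn code, with $1 and $2 as N1QL positional parameters for the key and the new product ID:
import com.couchbase.client.java.document.json.JsonArray;
import com.couchbase.client.java.query.N1qlQuery;

// customerBucket and customerKey are the ones from the question
String statement = "UPDATE customer USE KEYS $1 "
        + "SET purchased_product_ids = ARRAY_DISTINCT(ARRAY_APPEND(purchased_product_ids, $2))";
customerBucket.query(N1qlQuery.parameterized(statement, JsonArray.from(customerKey, 9)));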
Also, if you want to add multiple elements to the array at once, ARRAY_CONCAT() would be a better choice. More here:
https://docs.couchbase.com/server/6.0/n1ql/n1ql-language-reference/arrayfun.html
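For example, folding in all of today's purchases with one statement (the same pattern as above; [1, 2, 3, 6] stands in for the day's product IDs):
UPDATE customer USE KEYS "foo"
SET purchased_product_ids = ARRAY_DISTINCT(ARRAY_CONCAT(purchased_product_ids, [1, 2, 3, 6]))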
Do you need purchased_product_ids to be ordered? If not, you can convert it to a map, e.g.
{
    "customer_id" : 1000,
    "purchased_product_ids" : {"1": {}, "3": {}, "5": {}, "2": {}, "4": {}}
}
and then write to that map with subdoc, knowing you won't be conflicting (assuming product IDs are unique):
customerBucket.mutateIn(customerKey)
    .upsert("purchased_product_ids.1", JsonObject.create()) // already exists
    .upsert("purchased_product_ids.6", JsonObject.create()) // new product
    .execute();
which will result in:
{
    "customer_id" : 1000,
    "purchased_product_ids" : {"1": {}, "3": {}, "6": {}, "5": {}, "2": {}, "4": {}}
}
(I've used JsonObject.create() as a placeholder here in case you need to associate additional information with each customer-order pair, but you could equally just write null. If you do need purchased_product_ids to be ordered, you can write the timestamp of the order, e.g. "1": {"date": <TIMESTAMP>}, and then order it in code when you fetch.)