How to overwrite one2many records in Odoo through the default API - many-to-many

How do I overwrite one2many records in Odoo through the Odoo API?
This is my create JSON. What do I need to change in it to overwrite (replace) the existing lead_product_ids? At the moment it appends the records, so I end up with duplicates when I update. What value should I use in this code instead of (0, 0)?
Please help.
{
    "jsonrpc": "2.0",
    "params": {
        "model": "crm.lead",
        "method": "create",
        "args": [
            {
                "type": "opportunity",
                "name": "Fgtrdhjkkmmmmmmmm1290",
                "pro_info": "Fggggggg hhhhhh jkkkkkkknjj hjkll",
                "tag_ids": [6, 0, [43, 42]],
                "purposes_id": 3,
                "lead_product_ids": [
                    0,
                    0,
                    {
                        "product_uom": 21,
                        "product_id": 148,
                        "description": "",
                        "qty": 1,
                        "price_unit": 2448,
                        "expected_price": 2448,
                        "discount": 0,
                        "tax_id": [6, 0, [22]],
                        "price_subtotal": 2741.760009765625
                    }
                ],
                "partner_id": 1592,
                "religion": 2,
                "age_bucket": "40_45",
                "phone": "5695324877",
                "mobile": "5695324878",
                "locations_id": 157,
                "district_id": 157,
                "state_id": 593
            }
        ]
    }
}

The answer is found in the docstring of Model.write():
"""
...
This format is a list of triplets executed sequentially, where each
triplet is a command to execute on the set of records. Not all
commands apply in all situations. Possible commands are:
``(0, _, values)``
adds a new record created from the provided ``value`` dict.
``(1, id, values)``
updates an existing record of id ``id`` with the values in
``values``. Can not be used in :meth:`~.create`.
``(2, id, _)``
removes the record of id ``id`` from the set, then deletes it
(from the database). Can not be used in :meth:`~.create`.
``(3, id, _)``
removes the record of id ``id`` from the set, but does not
delete it. Can not be used on
:class:`~odoo.fields.One2many`. Can not be used in
:meth:`~.create`.
``(4, id, _)``
adds an existing record of id ``id`` to the set. Can not be
used on :class:`~odoo.fields.One2many`.
``(5, _, _)``
removes all records from the set, equivalent to using the
command ``3`` on every record explicitly. Can not be used on
:class:`~odoo.fields.One2many`. Can not be used in
:meth:`~.create`.
``(6, _, ids)``
replaces all existing records in the set by the ``ids`` list,
equivalent to using the command ``5`` followed by a command
``4`` for each ``id`` in ``ids``.
.. note:: Values marked as ``_`` in the list above are ignored and
can be anything, generally ``0`` or ``False``.
"""
It's (1, id, {'field_1': value_1, 'field_2': value_2}). But you should use write instead of create, because in create it makes no sense to change non-existing records of an x2many field.
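For illustration, a minimal sketch of such a write call over Odoo's XML-RPC API, assuming a lead with id 123 whose existing product line has id 456 (both hypothetical, as are the connection details). Command 1 updates the line in place; alternatively, command 2 deletes it and a fresh line can then be added with command 0. Either way the one2many set is replaced rather than appended to:

import xmlrpc.client

# Hypothetical connection details; replace with your own.
url, db, uid, password = "https://example.odoo.com", "mydb", 2, "secret"
models = xmlrpc.client.ServerProxy("%s/xmlrpc/2/object" % url)

models.execute_kw(db, uid, password, "crm.lead", "write", [[123], {
    "lead_product_ids": [
        (1, 456, {"qty": 2, "price_unit": 2448}),  # update line 456 in place
        # or: (2, 456, 0) to delete line 456, then (0, 0, {...}) to recreate it
    ],
}])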

Related

pandas json_normalize nested json where dictionary only exists on some records

I am trying to run pandas.json_normalize on a data file that has highly varied, nested json, where the content of the records can vary considerably.
I am processing a house listing file and trying to pull out prices. The prices data is stored as follows, and 'prices' is at the first nesting level within the json file:
"prices": [
{
"amountMax": 420000,
"amountMin": 420000,
"availability": "false",
"currency": "USD",
"dateSeen": [
"2020-12-21T11:57:17.190Z",
"2020-12-25T02:35:41.009Z"
],
"isSale": "false",
"isSold": "true",
"pricePerSquareFoot": 235,
"sourceURLs": [
"https://www.redfin.com/FL/Coconut-Creek/.../home/4146834"
]
}, # followed by additional entries
I am using the following line of code, which works if I edit the input file down to a single record that includes a 'prices' section:
df3 = pd.json_normalize(df['records'], record_path='prices',
                        meta=['id'],
                        errors='ignore')
However, the full file includes many records that do not include a prices section. If I run the code against a file with two records (one with, one without), it fails with KeyError: 'prices'.
Clearly errors='ignore' in json_normalize is not enough to handle the error.
What can I do? I would just like to skip the records without prices entirely.
A list comprehension on your JSON will do it. I've synthesized some JSON to match your description of input data.
js = {
    "records": [
        {
            "prices": [
                {
                    "amountMax": 420000,
                    "amountMin": 420000,
                    "availability": "false",
                    "currency": "USD",
                    "dateSeen": [
                        "2020-12-21T11:57:17.190Z",
                        "2020-12-25T02:35:41.009Z"
                    ],
                    "isSale": "false",
                    "isSold": "true",
                    "pricePerSquareFoot": 235,
                    "sourceURLs": [
                        "https://www.redfin.com/FL/Coconut-Creek/.../home/4146834"
                    ]
                }
            ],
            "id": 1
        },
        {"id": 2}
    ]
}
records_with_prices = [r for r in js["records"] if "prices" in r]
pd.json_normalize(records_with_prices, record_path="prices", meta="id")
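The comprehension simply drops any record that lacks a prices key (record id 2 above), so json_normalize only ever sees records where the record_path exists; for the synthesized input this yields a single-row DataFrame for id 1.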

Pentaho Kettle: How to dynamically fetch JSON file columns

Background: I work for a company that basically sells passes. Every order placed by a customer contains N passes.
Issue: I have JSON event-transaction files coming into an S3 bucket on a daily basis from DocumentDB (MongoDB). Each JSON file is associated with the relevant type of event (insert, modify or delete) for every document key (an order, in my case). The example below illustrates an "insert" event that came through to the S3 bucket:
{
    "_id": {
        "_data": "11111111111111"
    },
    "operationType": "insert",
    "clusterTime": {
        "$timestamp": {
            "t": 11111111,
            "i": 1
        }
    },
    "ns": {
        "db": "abc",
        "coll": "abc"
    },
    "documentKey": {
        "_id": {
            "$uuid": "abcabcabcabcabcabc"
        }
    },
    "fullDocument": {
        "_id": {
            "$uuid": "abcabcabcabcabcabc"
        },
        "orderNumber": "1234567",
        "externalOrderId": "12345678",
        "orderDateTime": "2020-09-11T08:06:26Z[UTC]",
        "attraction": "abc",
        "entryDate": {
            "$date": 2020-09-13
        },
        "entryTime": {
            "$date": 04000000
        },
        "requestId": "abc",
        "ticketUrl": "abc",
        "tickets": [
            {
                "passId": "1111111",
                "externalTicketId": "1234567"
            },
            {
                "passId": "222222222",
                "externalTicketId": "122442492"
            }
        ],
        "_class": "abc"
    }
}
As seen above, every JSON file might contain N passes, and every pass is in turn associated with an external ticket id, which is a different column (as seen above). I want to use Pentaho Kettle to read these JSON files and load the data into the DW.
I am aware of the Json input step and the Row Normalizer that could transpose the "PassID 1", "PassID 2", "PassID 3"... "PassID N" columns into one unique "Pass" column, and I would have to apply similar logic to the other column, "External ticket id". The problem with that approach is that it is quite static: I need to "tell" Pentaho in advance, in the Json input step, how many passes are coming. But what if tomorrow I have an order with 10 different passes? How can I do this dynamically to ensure the job will not break?
If you want a tabular output like

TicketUrl   Pass            ExternalTicketID
---------   -------------   ----------------
abc         PassIDValue1    ExTicketIDvalue1
abc         PassIDValue2    ExTicketIDvalue2
abc         PassIDValue3    ExTicketIDvalue3

and want the incoming values to be handled dynamically based on the JSON input file, you can download this transformation: Updated Link. In it, everything in the Json input step works dynamically.
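The key idea behind a dynamic setup is that a JSONPath expression with a [*] wildcard in the Json input step (e.g. $.fullDocument.tickets[*].passId) produces one output row per array element, however many passes an order has. Purely for illustration outside Kettle, here is the same flattening as a minimal Python sketch (event.json is an assumed filename, and the sample above would need its $date values quoted to be valid JSON):

import json
import pandas as pd

with open("event.json") as fh:  # assumed filename
    doc = json.load(fh)

# One row per entry of fullDocument.tickets, however many there are,
# with ticketUrl carried along from the parent document.
df = pd.json_normalize(doc["fullDocument"], record_path="tickets",
                       meta=["ticketUrl"])
print(df)  # columns: passId, externalTicketId, ticketUrl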

Ignore one key while matching two JSON responses

I am comparing two JSON responses from two different endpoints and trying to match them in automation with Rest Assured.
Everything is fine except one key, "secondsToNextEvent", which has a different value in each JSON due to the time difference between the GET requests.
{
    "englishName": "Serie A",
    "boCount": 103,
    "termKey": "serie_a",
    "name": "Serie A",
    "eventCount": 17,
    "id": 1000095001,
    "secondsToNextEvent": 1649,
    "sport": "FOOTBALL"
},
So I would like to compare both JSONs but ignore "secondsToNextEvent". Is there any method to do that?
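One common approach, sketched here as an assumption rather than taken from this thread, is to strip the volatile key from both parsed responses before asserting equality (in Java, JSONAssert's CustomComparator can achieve the same). A minimal Python sketch of the idea, with made-up sample data:

def strip_keys(node, ignore=frozenset({"secondsToNextEvent"})):
    # Recursively drop the ignored keys from a parsed JSON structure.
    if isinstance(node, dict):
        return {k: strip_keys(v, ignore) for k, v in node.items() if k not in ignore}
    if isinstance(node, list):
        return [strip_keys(v, ignore) for v in node]
    return node

a = {"englishName": "Serie A", "id": 1000095001, "secondsToNextEvent": 1649}
b = {"englishName": "Serie A", "id": 1000095001, "secondsToNextEvent": 1731}
assert strip_keys(a) == strip_keys(b)  # equal once the volatile key is removed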

Extract multiple values with JSON extractor

I'm getting the following data in the response of a request:
{
    "items": [
        {
            "id": 54925,
            "currCode": "USD",
            "lastUpdated": 1531233169000
        },
        {
            "id": 54926,
            "currCode": "USD",
            "lastUpdated": 1531233169000
        },
        {
            "id": 54927,
            "currCode": "USD",
            "lastUpdated": 1531233169000
        }
    ],
    "totalCount": 3
}
As we can see, there are three different ids in the data (54925, 54926, 54927).
I want to iterate over all these ids and perform some operation on each, basically like foreach(String id: ids) { request(id); }.
I added a JSON Extractor as follows:
As per my research (research link), it's supposed to store all the ids in id_list.
After this, I added a ForEach Controller to iterate over these values:
But somehow it's not going inside this loop. What am I doing wrong here?
Is there any other way to fetch all these ids and loop through them?
You forgot to set Match No. to -1 in the JSON Extractor to tell it to return all values.
-1 means extract all results; they will be stored in numbered variables named <reference name>_N.
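For reference, a settings sketch matching the sample response above (the variable names are assumptions):

JSON Extractor:
    Names of created variables: id_list
    JSON Path expressions: $.items[*].id
    Match No.: -1

ForEach Controller:
    Input variable prefix: id_list
    Output variable name: id
    Add "_" before number: checked

Inside the controller the current value is available as ${id}, and id_list_matchNr holds the total number of matches (3 for the sample above).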

Python 2.7: Generate JSON file with multiple query results in nested dict

What started as my personal initiative ended up being a quite interesting (may I say, challenging to some degree) project. My company decided to phase out one product and replace it with a new one which, instead of storing data in mdb files, uses JSON files. So I took the initiative to create a converter that reads the already created mdb files and converts them into the new JSON format.
However, now I'm at wits' end with this one:
I can read the mdb files and run a query to extract specific data.
By placing targetobj inside the for loop, I managed to extract the data for each row and feed it into a dict (targetobj):
for val in rows:
    targetobj = {"connection_props": {"port": 7800, "service": "", "host": val.Hostname, "pwd": "", "username": ""},
                 "group_list": val.Groups, "cpu_core_cnt": 2, "target_name": "somename", "target_type": "somethingsamething",
                 "os": val.OS, "rule_list": [], "user_list": val.Users}
If I print targetobj to the console, I can clearly see all the extracted values for each row.
Now my quest is to have the obtained results (for each row) inserted into main_dict under the 'targets': [] key. (Please see the sample JSON file below for illustration.)
main_dict = {"changed_time": 0, "year": 0, "description": 'blahblahblah', 'targets': [RESULTS FROM TARGETOBJ SHOULD BE ADDED HERE], "enabled": False}
So, for example, my JSON file should have a structure such as:
{"changed_time":1234556,
"year":0,
"description":"blahblahblah",
"targets":[
{"group_list":["QA"],
"cpu_core_cnt":1,
"target_name":"NewTarget",
"os":"unix",
"target_type":"",
"rule_list":[],
"user_list":[""],"connection_props":"port":someport,"service":"","host":"host1","pwd":"","username":""}
},
{"group_list":[],
"cpu_core_cnt":2,
"target_name":"",
"os":"unix",
"target_type":"",
"rule_list":[],
"user_list":["Web2user"],
"connection_props":{"port":anotherport,"service":"","host":"host2","pwd":"","username":""}}
],
"enabled":false}
So far I've been tweaking here and there to have the results written as intended; however, each time I'm getting only the last row's values written,
i.e. putting targetobj as a variable inside 'targets': []:
{"changed_time": 0, "year": 0, "description": 'ConvertedConfigFile', 'targets': [targetobj],
I know I'm missing something, I just need to find what and where.
Any help would be highly appreciated.
Thank you.
Just create your main_dict first and append to it in your loop, i.e.:
main_dict = {"changed_time": 0,
             "year": 0,
             "description": "blahblahblah",
             "targets": [],  # a new list for the target objects
             "enabled": False}

for val in rows:
    main_dict["targets"].append({  # append this dict to the targets list of main_dict
        "connection_props": {
            "port": 7800,
            "service": "",
            "host": val.Hostname,
            "pwd": "",
            "username": ""},
        "group_list": val.Groups,
        "cpu_core_cnt": 2,
        "target_name": "somename",
        "target_type": "somethingsamething",
        "os": val.OS,
        "rule_list": [],
        "user_list": val.Users
    })
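To produce the actual JSON file afterwards (a small addition, not part of the original answer; the filename is an assumption), serialize main_dict once the loop has finished:

import json

with open("converted_config.json", "w") as fh:
    json.dump(main_dict, fh, indent=4)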