Functional field for many2many in Odoo 8

I need to declare a many2many field as a functional field. I tried the code below, but it doesn't create any relation table in the database.

def _get_function(self, cr, uid, ids, name, args, context=None):
    resp = {}
    for data in self.browse(cr, uid, ids):
        print 'inside get Function'
        # resp should map each record id to a list of related ids
    return resp

'many2many_ids': fields.function(_get_function, method=True, relation="table.table1", obj='table1_table2_rel', type="many2many", string="Many2Many")
Now the data is saved in the form view, but I can't access those values in another function, like:

for data in self.browse(cr, uid, ids):
    print 'many2many_ids', data.many2many_ids

This prints no values. How can I do that?

If you want the field to be stored, use the "store" option.
Also, you might consider using the new API in v8. Check "Computed fields" in the official docs: https://www.odoo.com/documentation/8.0/reference/orm.html
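
For illustration, a minimal sketch of a stored computed many2many in the v8 API; the model, the relation table, and the depends trigger are hypothetical:

from openerp import models, fields, api

class Table2(models.Model):
    _name = 'table.table2'

    name = fields.Char()

    # store=True makes Odoo create the relation table and persist the
    # computed ids, so other methods can read many2many_ids normally.
    many2many_ids = fields.Many2many(
        'table.table1', 'table1_table2_rel', 'table2_id', 'table1_id',
        string="Many2Many",
        compute='_compute_many2many_ids', store=True)

    @api.depends('name')  # recomputed whenever 'name' changes
    def _compute_many2many_ids(self):
        for record in self:
            # hypothetical logic: link table1 records with a matching name
            record.many2many_ids = self.env['table.table1'].search(
                [('name', '=', record.name)])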


Advanced mapping of JSON in Azure Data Factory - some guidance requested

I'm trying to map a JSON document (sensor data) into a more meaningful representation using Mapping Data Flows. However, I'm having a hard time getting this to work and would really appreciate some insight/recommendations on how to solve the following:
The input is
What I would like to end up with is the following:
Any pointers as to how this can be implemented are more than welcome.
This can be accomplished using the Copy activity and then the split function in a Derived Column transformation in Azure Data Factory.
Use the Copy activity to read the JSON file as the source and, in the sink, use a SQL database to store the data as a table. In the Mapping tab, import the schema and map the JSON records to the corresponding column names. Refer to this third-party tutorial for guidance: https://sqlkover.com/dynamically-map-json-to-sql-in-azure-data-factory/
Finally, use the Data Flow activity and choose as its source the SQL table that you used as the sink above.
Select the Derived Column transformation.
Use the split function.
Add the column that will take the split values.
Use split(<column_name_to_split>, '_') to split the column on the _ delimiter, replacing <column_name_to_split> with the name of the column you want to split, as sketched after these steps.
Preview the data to check the result.
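
For reference, the same split behavior expressed in plain Python; the sample column value is made up:

# Equivalent of the Derived Column expression split(<column_name_to_split>, '_')
sample_value = 'device_42_temperature'   # hypothetical column value
parts = sample_value.split('_')          # split on the _ delimiter
print(parts)                             # ['device', '42', 'temperature']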

Azure Data Factory copy nested JSON to SQL table

Does anyone have an easy way to convert nested JSON to a flat SQL table? I just want to repeat the higher-level data on each of the lower-level detail rows. It looks like it can be done in the mapping; I have tried it as per the MS documentation but got a table full of NULLs. Here is what I have tried and the results.
Option 1
Result: only returns the first record of the 'assignedLicenses' array.
Option 2
Result: returns multiple 'assignedLicenses' for each user, but only the first user id on each page.
Option 3: as per the MS documentation
Result: returns all NULL values.
You can have a try:
1. Click the "Import schemas" button.
2. If you have a JSON array, select it.
3. You can directly see and edit the fields' JSON paths by opening the Advanced editor.
Here is the Microsoft documentation about it; please refer to it.
Hope this can help you.
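
To make the goal concrete, this is the flattening being asked for, sketched in Python with made-up data:

# Hypothetical nested payload: users, each with a list of assignedLicenses
data = {'value': [
    {'id': 'user1', 'assignedLicenses': [{'skuId': 'a'}, {'skuId': 'b'}]},
    {'id': 'user2', 'assignedLicenses': [{'skuId': 'c'}]},
]}

# Repeat the higher-level user data on each lower-level license row
rows = [
    {'userId': user['id'], 'skuId': lic['skuId']}
    for user in data['value']
    for lic in user['assignedLicenses']
]
print(rows)
# [{'userId': 'user1', 'skuId': 'a'}, {'userId': 'user1', 'skuId': 'b'},
#  {'userId': 'user2', 'skuId': 'c'}]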

How to include the SR-related work log long description when using the Maximo OSLC REST API?

I am doing an HTTP GET request to /maximo/oslc/os/mxsr and using the oslc.select query string parameter to choose:
*,doclinks{*},worklog{*},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}
This lets me get related data, including related worklogs, but the worklog does not include the 'description_longdescription' field.
The only way I seem to be able to get that field is by doing a separate HTTP GET to query a worklog id directly through /maxrest/rest/mbo/worklog. Then it provides the description_longdescription field.
I understand this field is stored separately through the linked longdescription table, but I was hoping to get the data through the "next gen" OSLC API with one HTTP GET request.
I've tried putting in 'worklog{*,description_longdescription}', as I read somewhere that longdescription is a "non-persistent" field and must be explicitly named for inclusion, but it had no effect.
I figured out that for the /maximo/oslc/os/mxsr object in the API, I needed to reference the related MODIFYWORKLOG object through the rel.modifyworklog syntax in the oslc.select query string:
oslc.select=*,doclinks{*},rel.modifyworklog{*,description_longdescription},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}
I also had to explicitly name the non-persistent field description_longdescription for it to be included.
Ref. for the "rel." syntax: https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_querying_maximo_asset_management_by_using_the_rest_api
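
A minimal sketch of that request in Python using the requests library; the host, the maxauth credentials, and the lean parameter are placeholders/assumptions:

import requests

url = 'https://host/maximo/oslc/os/mxsr'
params = {
    'oslc.select': ('*,doclinks{*},rel.modifyworklog{*,description_longdescription},'
                    'rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}'),
    'lean': 1,  # assumption: return compact JSON without namespace prefixes
}
# maxauth carries base64-encoded 'user:password' (native Maximo auth)
headers = {'maxauth': '<base64 user:password>', 'Accept': 'application/json'}

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()
print(response.json())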

Query Google Admin User directory comparing parameters

I'm trying to filter my user list by comparing two parameters:
query="EmployeeData.EmployeeID=externalId"
EmployeeData.EmployeeID is a custom schema field that is populated, by a cron job, with the same value as externalId.
Of course, I let the cron do the field copy only when necessary; this is the reason I'm trying to filter the user list.
The way I wrote it, it seems the query is looking for the literal value "externalId" in EmployeeData.EmployeeID, ignoring that "externalId" is itself a field.
Any suggestions?
The way your code is written, the query sent to Google's servers is, as you correctly guessed, the following:
EmployeeData.EmployeeID=externalId, where your actual externalId is not sent but rather the literal string "externalId".
To replace this string with the actual value of your variable, you can use what is called "string concatenation". To do so, you just need to modify your code as shown below:
query="EmployeeData.EmployeeID=" + externalId;
This way, the query will be sent to Google's servers as you need.
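
For illustration, the same idea in Python with the Admin SDK Directory API client; 'creds' and the sample externalId value are placeholders:

from googleapiclient.discovery import build

# Assumes 'creds' holds authorized admin credentials obtained elsewhere.
service = build('admin', 'directory_v1', credentials=creds)

external_id = '12345'  # hypothetical value the cron job copies
# Concatenate the variable's value instead of embedding its name:
query = 'EmployeeData.EmployeeID=' + external_id

results = service.users().list(
    customer='my_customer',
    query=query,
    projection='full',  # include custom schema fields in the response
).execute()

for user in results.get('users', []):
    print(user['primaryEmail'])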

Infusionsoft populate custom field via API

I am trying to set the value of a custom field that belongs to a contact via the Infusionsoft API.
When I try to create or update the contact with the custom field value set I receive the following response:
[NoFieldFound]No field found: Contact.InitialSiteVisitTime
This leads me to believe that the custom field value is stored in a different table. Can anyone please tell me the name of the table that holds Infusionsoft's custom field values?
Infusionsoft automatically prefixes custom field names with an underscore. Adding the underscore to the name allows this value to be populated via the API successfully.
Note that custom fields' API names (the way they are stored in the Infusionsoft database) are not always equal to the name you give a custom field on creation.
This article shows where your actual custom field API names are listed.
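
A minimal sketch of the update over Infusionsoft's classic XML-RPC API; the app URL, API key, contact id, and timestamp value are placeholders:

import xmlrpc.client

server = xmlrpc.client.ServerProxy(
    'https://yourapp.infusionsoft.com/api/xmlrpc')
api_key = '<your API key>'
contact_id = 123  # hypothetical contact

# Note the leading underscore on the custom field's API name
values = {'_InitialSiteVisitTime': '2015-01-01 09:00:00'}
server.ContactService.update(api_key, contact_id, values)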