ServiceNow - JSON Web Service, display related tables

I'm working on a C# program that retrieves data from a ServiceNow database and converts that data into C# .NET objects. I'm using the JSON Web Service to return my data in JSON format.
What I want to achieve is as follows: if there is a relational mapping behind a value (for
example: I have a table called Company, where CEO is not a TEXT field but a sys_id referencing an Employee table), I want to be able to output that data not as a sys_id (or just the name property via the 'displayvariable' parameter) but as an object embedded in the JSON.
This means that the value of a property should be an object in JSON instead of just a single value.
A few examples:
// I don't want the JSON like this
{"Company":{"CEO":"b181e841c9212c008aeb36850331fab2"}}
// Or by displaying the name of the sys_id table
{"Company":{"CEO":"James Henderson" }}
// I want the data as follows, so I can have all the data I need inside a single JSON record.
{"Company":{"CEO":{"name":"James Henderson", "age":34, "sex":"male", "office":"SBN Left Floor 23"}}}
From reading the documentation I couldn't find anything in the JSON Web Service that allows me to display the information like this, nor could I find any other alternative. It presumably has something to do with joining the tables and displaying it all in the right format.

I have been using SNC for almost three years and have not found a way to automatically join tables in a web service. Your best option would be to use a scripted web service which takes, say, a query parameter and a table parameter. Then you can JSON-serialize your result as you see fit.
Another option would be to write a new processor that traverses the GlideRecord object. The ?JSON parameter you pass in the URL is merely a flag that routes your request to a particular processor. Unfortunately, the OOB one is (I believe) a Java class rather than a JS script, so you would need to write a script much like the one mentioned above that walks the object graph, serializing it as far down as you want to go.
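For illustration, a rough sketch of that kind of script (the table and field names here are assumptions, not your actual schema): in a scripted web service you follow the reference fields yourself and build the nested structure before serializing it.
var companies = [];
var company = new GlideRecord('u_company');        // assumed table name
company.query();
while (company.next()) {
    var row = { name: company.getValue('name') };
    var ceo = company.ceo.getRefRecord();           // follow the reference field (assumed field name 'ceo')
    if (ceo.isValidRecord()) {
        row.CEO = {
            name: ceo.getDisplayValue('name'),
            office: ceo.getDisplayValue('location') // assumed field
        };
    }
    companies.push(row);
}
// Older instances ship a JSON script include; on newer ones JSON.stringify(companies) works too.
var body = new JSON().encode({ Company: companies });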

Related

Delete from json without the use of a 'real' id

In Angular:
I'm trying to delete items from a local json server using an HTTP request.
The problem is that the items don't have 'real' ids. Their ids are strings, which json-server doesn't recognise as ids (as far as I know).
So when I try to look up an item by id (either to get it or delete it) I have to use, for example:
"http://localhost:3000/watchlist?imdbID=tt5745872"
which gives an array with 1 item.
When using this in a delete request, it will result in a 404.
I was wondering if there is some kind of workaround for this, or do I really have to implement 'real' ids?
Context: I'm getting movies from an API and then storing them in a json server. As the API uses string ids, it would be a pain to implement a second id for the same objects.
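For reference, a minimal sketch of the lookup described above (Angular HttpClient; WatchlistItem is a hypothetical type). Filtering by imdbID returns an array of matches rather than a single resource, which is why a DELETE against ?imdbID=... comes back 404: DELETE has to address /watchlist/<id> directly.
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';

interface WatchlistItem { imdbID: string; Title: string; }

@Injectable({ providedIn: 'root' })
export class WatchlistService {
  constructor(private http: HttpClient) {}

  findByImdbId(imdbID: string) {
    // GET http://localhost:3000/watchlist?imdbID=tt5745872 -> an array with one matching item
    return this.http.get<WatchlistItem[]>('http://localhost:3000/watchlist', { params: { imdbID } });
  }
}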

Dynamically refer to Json value in Data Factory copy

I have an ADF CopyRestToADLS activity which correctly saves a complex JSON object to Data Lake storage. But I additionally need to pass one of the JSON values (myextravalue) to a stored procedure. I tried referencing it in the stored procedure parameter as #{activity('CopyRESTtoADLS').output.myextravalue but I am getting the error:
The actions CopyRestToADLS refernced by 'inputs' in the action ExectuteStored procedure1 are not defined in the template
{
"items": [1000 items],
"count": 1000,
"myextravalue": 15983444
}
I would like to reference this value dynamically, because the CopyRestToADLS source REST dataset dynamically calls different REST endpoints, so the structure of the JSON object is different each time. But myextravalue is always present in each response.
How is it possible to reference myextravalue and use it as a parameter?
You could create another Lookup activity on the REST data source to get the JSON value, then pass it to the Stored Procedure activity.
Yes, it will create a new REST request, and it seems to be an easy way to achieve your purpose: the Lookup activity only reads the content of the source and doesn't save it anywhere. The stored procedure parameter could then use an expression like the one below.
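A sketch of that parameter value, assuming the Lookup activity is named 'Lookup1' and has "First row only" enabled (which exposes the result under output.firstRow):
@{activity('Lookup1').output.firstRow.myextravalue}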
Another solution would be to get the value from the copy activity's output file after the copy activity completes.
I'm glad you solved it this way:
"I created a Data Flow to read from the folder where Copy Activity saves dynamically named output json filenames. After importing schema from sample file, I selected the myextravalue as the only mapping in the Sink Mapping section."

How to include SR related work log long description when using maximo oslc rest api?

I am doing an HTTP GET request to /maximo/oslc/os/mxsr and using the oslc.select query string parameter to choose:
*,doclinks{*},worklog{*},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}
This lets me get related data, including related worklogs, but the worklog does not include the 'description_longdescription' field.
The only way I seem to be able to get that field is to do a separate HTTP GET that queries a worklog id directly through /maxrest/rest/mbo/worklog; then it provides the description_longdescription field.
I understand this field is stored separately through the linked longdescription table, but I was hoping to get the data through the "next gen" oslc api with one http get request.
I've tried putting in 'worklog{*,description_longdescription}', as I read somewhere that longdescription is a "non-persistent" field and must be explicitly named for inclusion, but it had no effect.
I figured out that for the /maximo/oslc/os/mxsr object in the API, I needed to reference the related MODIFYWORKLOG object through the rel.modifyworklog syntax in the oslc.select query string:
oslc.select=*,doclinks{*},rel.modifyworklog{*,description_longdescription},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}
I also had to explicitly name the non-persistent field description_longdescription for it to be included.
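Putting it together, the full request looks something like this (host omitted; the optional lean=1 parameter just drops the namespace prefixes from the response):
GET /maximo/oslc/os/mxsr?lean=1&oslc.select=*,doclinks{*},rel.modifyworklog{*,description_longdescription},rel.commlog{*},rel.woactivity{*,rel.woactivity{*}}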
Ref. for the "rel." syntax: https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_querying_maximo_asset_management_by_using_the_rest_api

How can I reference a JSON source for a derived column action in Azure Data Factory

I'm new to Azure Data Factory. I've been able to generate a set of JSON files from a REST API source using a Pipeline. Each file consists of one top level JSON object with an array of up to 100 child objects. The output is saved to an Azure Blob Storage container.
I now want to use a Mapping Data Flow to modify the JSON before I write it to Azure SQL, however I'm struggling with the syntax. I've configured the source to point to the directory containing the JSON files. The Source Projection tab displays the correct schema. I can preview the data and I see a row for each file and I can expand the child objects to see the full structure.
However, when I add a Derived Column action, the Input Schema is blank in the Expression Builder. I can refer to the top level elements in the source using the byName and byPosition functions, but I don't know how I can reference the child elements.
The examples that I have been able to find online use a SQL table or CSV file as a source. I can't find any examples that use hierarchical data as the source for a derived column.
Am I missing something? Is this scenario supported?
I found a way to achieve what I want. This may not be the best approach, but it works.
It seems that it is difficult to deal with JSON that has multiple hierarchies as a source for copy data activities. You can choose one level of repeating data to map to a table structure (the Collection Reference property on the Mapping tab).
In my scenario, there was additional repeating data within the data I was mapping to my table. I updated the mapping to write the child JSON data to a text field in my SQL table. To do this, I needed to use the Azure Data Factory JSON editor for my pipeline. You can access this from the "Code" link in the top right corner of the pipeline visual editor.
I added the following line after the closing bracket for the "mappings" array for my copy activity:
"mapComplexValuesToString": true
The full path to the mapping array in the activity definition is typeProperties - translator - mappings. Make sure your commas are correct after you add the new element.
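In context, the relevant part of the copy activity definition looks roughly like this (everything except mapComplexValuesToString is illustrative and will differ for your dataset):
"typeProperties": {
    "translator": {
        "type": "TabularTranslator",
        "collectionReference": "$['items']",
        "mappings": [
            { "source": { "path": "$['id']" }, "sink": { "name": "Id" } }
        ],
        "mapComplexValuesToString": true
    }
}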
With this approach, I had a row in my SQL table for each array item in my Collection Reference. The scalar child elements in the array items are mapped to table columns and the child JSON element is written to a data column in the same table.
To extract the values I need within the child JSON, I created a SQL view that uses the CROSS APPLY OPENJSON syntax. This allows me to treat the JSON in the data field similar to a related table. You can specify the structure that your JSON is in. If you have nested data in your JSON, you can apply the same approach for each level.
The OPENJSON command is only supported by more recent versions of SQL Server. I'm using Azure SQL, so that works for me.
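For example, a view along these lines (table, column, and JSON property names are all assumptions) exposes the nested values as columns:
CREATE VIEW dbo.vw_ChildItems AS
SELECT  t.Id,
        j.ItemName,
        j.Quantity
FROM    dbo.ImportedRows AS t
CROSS APPLY OPENJSON(t.ChildJson)
WITH (
    ItemName nvarchar(100) '$.name',
    Quantity int           '$.quantity'
) AS j;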

using a sort compare function on hierarchical data

I am attempting to use sortCompareFunction to ensure a specific data entry is the last object of its parent. However, when using the sortCompareFunction, the objects being passed in are always the root object. Is there another way I could sort my data to ensure the object named "Upload" is always at the bottom? Just to clarify, I'm trying to sort the data held in Objects 1, 2 and 3, for example. My data is structured as below:
rootObject
  Object1
    namedObject1
    namedObject2
    Upload
  Object2
    stackObject
    Upload
    errorprofile
  Object3
    images
    Upload
    Video