Does bulk $import support transaction bundles (accessing individual resources after posting) in the Azure FHIR service (fhir-server-for-azure)?

I posted a bulk $import with a file containing transaction bundles in NDJSON format. After posting, they are stored in the FHIR store as Bundle resources rather than as individual resources. According to the documentation, when data is stored as a transaction bundle, the resources inside that bundle should be created individually, but the records are being created as bundles.
I am using the following import body; let me know if any changes need to be made to it.
How do I post a transaction bundle using $import?
{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "mode",
      "valueString": "InitialLoad"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "Bundle"
        },
        {
          "name": "url",
          "valueUri": "https://xyz/fhir-sample-import-data/patient_bundles_ndjson.ndjson"
        }
      ]
    }
  ]
}
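If the goal is to end up with individual resources, one workaround (a sketch, not documented service behavior) is to flatten each transaction bundle into one resource per NDJSON line before import, since $import appears to persist each line as-is. Assuming jq is available and every line of the file is a Bundle:

  # emit each resource contained in each bundle as its own NDJSON line
  jq -c '.entry[].resource' patient_bundles_ndjson.ndjson > patient_resources.ndjson

The input "type" parameter would then name the actual resource type (e.g. "Patient"), and if the bundles mix resource types the output would need to be split into one file per type. Note that intra-bundle references (urn:uuid fullUrls) are not resolved this way, as they would be by POSTing the transaction bundle to the server endpoint.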

Related

Ambari REST API + set JSON configuration in Ambari

To create a new config group, it is mandatory to provide a config group name, a tag, and the name of the cluster it belongs to. The tag, as seen in this example, is the name of a service. Two config groups with the same tag cannot be associated with the same host.
How do I run the following JSON file with curl in order to set this config group in Ambari?
POST /api/v1/clusters/c1/config_groups
[
  {
    "ConfigGroup": {
      "cluster_name": "c1",
      "group_name": "hdfs-nextgenslaves",
      "tag": "HDFS",
      "description": "HDFS configs for rack added on May 19, 2010",
      "hosts": [
        {
          "host_name": "host1"
        }
      ],
      "desired_configs": [
        {
          "type": "core-site",
          "tag": "nextgen1",
          "properties": {
            "key": "value"
          }
        }
      ]
    }
  }
]
Reference: https://github.com/swagle/test/blob/master/docs/api/v1/config-groups.md
Is your question about how to send multiline JSON with curl? You can find different methods here.
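As one possibility, a minimal curl call (host, port, and credentials are placeholders; Ambari requires the X-Requested-By header on write requests) could look like:

  curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @config_groups.json http://ambari-host:8080/api/v1/clusters/c1/config_groups

where config_groups.json contains the array shown above.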

Is there any data returned from the Forge Data Management Search API to indicate a model is deleted?

When using GET projects/:project_id/folders/:folder_id/search in the Forge Data Management API on a model whose last version is deleted, is there any information in the "attributes" or other returned data that indicates the file is deleted?
Currently, a second call to GET projects/:project_id/items/:item_id/versions is used to determine whether the latest version is deleted (below), but it would be preferable not to make another request to get this information.
Returned JSON from /versions (with some data removed):
"data": [{
"type": "versions",
"id": "urn:adsk.wipprod:fs.file:vf.w0cwXPUwQziKIHtKBtYRaA?version=3",
"attributes": {
"versionNumber": 3,
"extension": {
"type": "versions:autodesk.core:Deleted",
"version": "1.0",
"schema": {
"href": "https://developer.api.autodesk.com/schema/v1/versions/versions:autodesk.core:Deleted-1.0"
},
"data": {
"originalName": "**.rvt"
}
}
}]
The JSON attribute attributes.hidden = true seems to indicate deletion, and it can be accessed via the filter[hidden]=true query parameter. I'm closing this as the correct answer.
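As a sketch (base URL and token handling assumed), the filter can be applied directly to the search call so that no second request to /versions is needed:

  curl -H "Authorization: Bearer $TOKEN" "https://developer.api.autodesk.com/data/v1/projects/<project_id>/folders/<folder_id>/search?filter%5Bhidden%5D=true"

The square brackets in filter[hidden] are URL-encoded as %5B and %5D.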

How to define a variable within a JSON file and use it within that JSON file

I'm trying to find out whether a JSON file supports defining variables and using them within that same file.
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": "${passedFromOutside}",
  "products": [
    {
      "name": "product_${definedVariable1}_common",
      "version": "1.1.0"
    },
    {
      "name": "product_{{passedVariable2}}_common",
      "version": "1.5.1"
    }
  ]
}
I know YAML files allow this, but I'm not sure whether JSON allows this behavior or not. My plan is that a user will pass the "definedVariable" value from Jenkins and I'll create a target JSON file (after substitution).
This might help you:
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": `${passedFromOutside}`,
  "products": [
    {
      "name": `product_${definedVariable1}_common`,
      "version": "1.1.0"
    },
    {
      "name": `product_${passedVariable2}_common`,
      "version": 1.5.1
    }
  ]
}
*Note the use of backticks (``) instead of regular quotes.
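Be aware that backticks are not valid JSON, so the snippet above is a template rather than parseable JSON. Plain JSON has no variable syntax at all; the usual approach is to keep a template file and render it before use. For values passed in from outside (e.g. by Jenkins), a sketch using envsubst, assuming the template is saved as config.template.json:

  # substitute only the listed variable, leaving other ${...} tokens intact
  export passedFromOutside=INSTANCE2
  envsubst '${passedFromOutside}' < config.template.json > config.json

Variables defined inside the file itself (like definedVariable1) are beyond plain substitution and need a real templating tool such as Jinja2 or jq.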

Data Factory: Azure SQL input and output for pipeline activity type AzureMLBatchExecution

In Azure Data Factory, I'm trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table for the output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline using a linked service (type 'AzureML', using the Request URI and API key of the ML web service) and an input and output dataset (type 'AzureSqlTable').
Deployment and provisioning succeeded. The pipeline starts as scheduled but keeps 'Running' without any result, and the pipeline activity does not show up in Monitor & Manage under Activity Windows.
On various sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with blob input and output. I want to use Azure SQL input and output, but I can't get this working.
Can someone provide a sample JSON script, or tell me what's possibly wrong with the code below?
Thanks!
{
  "name": "Predictive_ML_Pipeline",
  "properties": {
    "description": "use MyAzureML model",
    "activities": [
      {
        "type": "AzureMLBatchExecution",
        "typeProperties": {},
        "inputs": [
          {
            "name": "AzureSQLDataset_ML_Input"
          }
        ],
        "outputs": [
          {
            "name": "AzureSQLDataset_ML_Output"
          }
        ],
        "policy": {
          "timeout": "02:00:00",
          "concurrency": 3,
          "executionPriorityOrder": "NewestFirst",
          "retry": 1
        },
        "scheduler": {
          "frequency": "Week",
          "interval": 1
        },
        "name": "My_ML_Activity",
        "description": "prediction analysis on ML batch input",
        "linkedServiceName": "AzureMLLinkedService"
      }
    ],
    "start": "2017-04-04T09:00:00Z",
    "end": "2017-04-04T18:00:00Z",
    "isPaused": false,
    "hubName": "myml_hub",
    "pipelineMode": "Scheduled"
  }
}
With a little help from a Microsoft technician, I've got this working. Only the schedule section of the JSON script above needed to change:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, so that date must fall within the start and end dates. For more details about scheduling, see https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
The Azure SQL Input dataset should look like this:
{
  "name": "AzureSQLDataset_ML_Input",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Input"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    },
    "external": true,
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}
I added the external and policy properties to this dataset (see the script above), and after that it worked.
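For completeness, a sketch of the corresponding output dataset (the linked service and table names here are assumptions; external and policy are only needed on the input side, since the output table is produced by the pipeline itself):

{
  "name": "AzureSQLDataset_ML_Output",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "DST_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Output"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    }
  }
}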

.proj file to read and update data in .json file

I'm writing a build.proj file to build and deploy an application. I have one application.configuration file, which contains some project configuration. application.configuration is written in JSON format.
{
  "Clients": [
    {
      "Id": "xyz",
      "Name": "test",
      "Flow": "2"
    }
  ],
  "Scopes": [
    {
      "Name": "roles",
      "DisplayName": "roles",
      "Claims": [
        {
          "Name": "role"
        }
      ]
    }
  ]
}
In the build.proj file I want to read and update data from this file, so what should I use?
The built-in MSBuild tasks only read and update data from .xml files.
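MSBuild itself ships no JSON task, so one option (a sketch; the file name and PowerShell availability are assumptions) is to shell out from the .proj via the Exec task and let PowerShell's ConvertFrom-Json/ConvertTo-Json do the reading and writing:

<Target Name="UpdateAppConfig">
  <!-- Read application.configuration, change a value, write it back -->
  <Exec Command="powershell -NoProfile -Command &quot;$c = Get-Content application.configuration -Raw | ConvertFrom-Json; $c.Clients[0].Flow = '3'; $c | ConvertTo-Json -Depth 10 | Set-Content application.configuration&quot;" />
</Target>

Alternatively, a custom inline task referencing a JSON library such as Newtonsoft.Json can do the same without leaving MSBuild.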