How to search a Claim using extension field - fhir-server-for-azure

I have a Claim payload to which I have added an extension block (not sure where the url came from):
"extension" : [{
"url" : "http://hl7.org/fhir/StructureDefinition/iso-21090-EN-use",
"valueString" : "MAPD"
}],
I want to search for this claim record using the extension but don't know how to do it.
I tried a GET request to https://<azure_fhir_server>/Claim?extension=MAPD but it returns:
{
"severity": "warning",
"code": "not-supported",
"diagnostics": "The search parameter 'extension' is not supported for resource type 'Claim'."
}
=====================
EDIT:
As suggested by @Nik Klassen, I posted the following payload to /SearchParameter:
{
"resourceType" : "SearchParameter",
"id": "b072f860-7ecd-4d73-a490-74acd673f8d2",
"name": "extensionValueString",
"status": "active",
"url" : "http://hl7.org/fhir/SearchParameter/extension-valuestring",
"description": "Returns a Claim with extension.valueString matching the specified one in request.",
"code" : "lob",
"base" : [
"Claim"
],
"type" : "string",
"expression" : "Claim.extension.where(url ='http://hl7.org/fhir/SearchParameter/extension-valuestring').extension.value.string"
}
Also, I did the $reindex on the Claim, but couldn't find the parameter lob ($reindex response is below):
{
"resourceType": "Parameters",
"id": "ee8786d2-616a-4b81-8f6a-8089591b1225",
"meta": {
"versionId": "1"
},
"parameter": [
{
"name": "_id",
"valueString": "28e808d6-e420-4a33-bb0b-7cd325c8c169"
},
{
"name": "status",
"valueString": "http://hl7.org/fhir/fm-status|active"
},
{
"name": "priority",
"valueString": "http://terminology.hl7.org/CodeSystem/processpriority|normal"
},
{
"name": "facility",
"valueString": "Location/Location"
},
{
"name": "patient",
"valueString": "Patient/f8d8477c-1ef4-4878-abed-51e514bfd91f"
},
{
"name": "encounter",
"valueString": "Encounter/67062d00-2531-3ebd-8558-1de2fd3e5aab"
},
{
"name": "use",
"valueString": "http://hl7.org/fhir/claim-use|claim"
},
{
"name": "identifier",
"valueString": "TEST"
},
{
"name": "_lastUpdated",
"valueString": "2021-08-25T07:39:15.3050000+00:00"
},
{
"name": "created",
"valueString": "1957-04-12T21:23:35+05:30"
}
]
}
I read somewhere that I need to create a StructureDefinition, but I don't know how to do that.
Basically, I want to add a field "LOB" as an extension to all my resources, and search them using GET: https://fhir_server/{resource}?lob=<value>

By default you can only search on fields that are part of the FHIR spec. These are listed in a "Search Parameters" section on the page for each resource type, e.g. https://hl7.org/fhir/claim.html#search. To search on extensions you will need to create a custom SearchParameter (https://learn.microsoft.com/en-us/azure/healthcare-apis/fhir/how-to-do-custom-search), e.g.:
POST {{FHIR_URL}}/SearchParameter
{
"resourceType" : "SearchParameter",
"id" : "iso-21090-EN-use",
"url" : "ttp://hl7.org/fhir/SearchParameter/iso-21090-EN-use",
... some required fields ...
"code" : "iso-use",
"base" : [
"Claim"
],
"type" : "token",
"expression" : "Claim.extension.where(url = 'http://hl7.org/fhir/StructureDefinition/iso-21090-EN-use').value.string"
}
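Once the SearchParameter is created it won't return results until the database has been reindexed. Per the linked Azure doc, you trigger that with the $reindex operation (you can also test a single resource first with POST {{FHIR_URL}}/Claim/<id>/$reindex), after which you can search on the code you defined, e.g.:
POST {{FHIR_URL}}/$reindex

GET {{FHIR_URL}}/Claim?iso-use=MAPD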

Related

Error with JSON webhook - Sending data with JSON

I have a problem with a webhook, or to be more accurate, with sending data with the POST method to an endpoint.
I am using this endpoint for the POST method:
https://edapi.campaigner.com/v1/Import/AddOrUpdate?ApiKey=apikey_value
and this JSON snippet:
{
"Subscribers": [
{
"EmailAddress": "email",
"CustomFields": [
{
"FieldName": "Source",
"Value": "source"
},
{
"FieldName": "Campaign",
"Value": "campaign"
},
{
"FieldName": "Medium",
"Value": "medium"
}
],
"Lists": [
200468800
]
}
]
}
But after I set up an automation workflow to transfer data from one database (provider 1) to another (provider 2), I get this error:
{
"ContactsSubmitted": 1,
"Successes": 0,
"Failures": [
{
"EmailAddress": "email",
"ErrorCode": 101,
"Message": "Invalid Email Address"
}
]
}
Any suggestions? Additional explanation: FieldName is the name from provider 2 and the field value comes from provider 1.
The missing [some_variable] placeholders were the part where my code threw an error. So the right code is:
{
"Subscribers": [
{
"EmailAddress": "[email]",
"CustomFields": [
{
"FieldName": "Source",
"Value": "[source]"
},
{
"FieldName": "Campaign",
"Value": "[campaign]"
},
{
"FieldName": "Medium",
"Value": "[medium]"
}
],
"Lists": [
200468800
]
}
]
}
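If you want to sanity-check the payload outside the automation workflow, you can POST it to the same endpoint with curl; the ApiKey, email address and field values below are placeholders:
curl -X POST "https://edapi.campaigner.com/v1/Import/AddOrUpdate?ApiKey=apikey_value" \
-H "Content-Type: application/json" \
-d '{
"Subscribers": [
{
"EmailAddress": "user@example.com",
"CustomFields": [
{ "FieldName": "Source", "Value": "newsletter" },
{ "FieldName": "Campaign", "Value": "spring" },
{ "FieldName": "Medium", "Value": "email" }
],
"Lists": [ 200468800 ]
}
]
}'
If the response comes back with "Successes": 1, the payload shape is fine, and the earlier "Invalid Email Address" error points at the placeholder not being substituted by the workflow.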

Elasticsearch query with nested sets

I am pretty new to Elasticsearch, so please bear with me and let me know if I need to provide any additional information. I have inherited a project and need to implement new search functionality. The document/mapping structure is already in place but can be changed if it cannot facilitate what I am trying to achieve. I am using Elasticsearch version 5.6.16.
A company is able to offer a number of services. Each service offering is grouped together in a set. Each set is composed of 3 categories:
Product(s) (ID 1)
Process(es) (ID 3)
Material(s) (ID 4)
The document structure looks like:
[{
"id": 4485,
"name": "Company A",
// ...
"services": {
"595": {
"1": [
95, 97, 91
],
"3": [
475, 476, 471
],
"4": [
644, 645, 683
]
},
"596": {
"1": [
91, 89, 76
],
"3": [
476, 476, 301
],
"4": [
644, 647, 555
]
},
"597": {
"1": [
92, 93, 89
],
"3": [
473, 472, 576
],
"4": [
641, 645, 454
]
}
}
}]
In the above example, 595, 596 and 597 are IDs relating to the set; 1, 3 and 4 relate to the categories (mentioned above).
The mapping looks like:
[{
"id": {
"type": "long"
},
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"services": {
"properties": {
// ...
"595": {
"properties": {
"1": {"type": "long"},
"3": {"type": "long"},
"4": {"type": "long"}
}
},
"596": {
"properties": {
"1": {"type": "long"},
"3": {"type": "long"},
"4": {"type": "long"}
}
},
// ...
}
}
}]
When searching for a company that provides a Product (ID 1), a search for 91 and 95 should return Company A because those IDs are within the same set. But if I were to search for 95 and 76, it should not return Company A: while the company does provide both of these products, they are not in the same set. The same rules would apply when searching Processes and Materials, or a combination of these.
I am looking for confirmation that the current document/mapping structure will facilitate this type of search.
If so, given 3 arrays of IDs (Products, Processes and Materials), what is the JSON to find all companies that provide these services within the same set?
If not, how should the document/mapping be changed to allow this search?
Thank you for your help.
It is a bad idea to use what is really a value (an ID) as a field name itself, as that could lead to the creation of a great many inverted indexes (remember that in Elasticsearch, an inverted index is created for every field), and I feel it is not reasonable to have something like that.
Instead change your data model to something like below. I have also included sample documents, the possible queries you can apply and how the response can appear.
Note that just for the sake of simplicity, I'm focusing only on the services field that you mentioned in your mapping.
Mapping:
PUT my_services_index
{
"mappings": {
"properties": {
"services":{
"type": "nested", <----- Note this
"properties": {
"service_key":{
"type": "keyword" <----- Note that I have mentioned keyword here. Feel free to use text and keyword if you plan to implement partial + exact search.
},
"product_key": {
"type": "keyword"
},
"product_values": {
"type": "keyword"
},
"process_key":{
"type": "keyword"
},
"process_values":{
"type": "keyword"
},
"material_key":{
"type": "keyword"
},
"material_values":{
"type": "keyword"
}
}
}
}
}
}
Notice that I've made use of the nested datatype. I'd suggest you go through that link to understand why we need it instead of the plain object type.
Sample Document:
POST my_services_index/_doc/1
{
"services":[
{
"service_key": "595",
"process_key": "1",
"process_values": ["95", "97", "91"],
"product_key": "3",
"product_values": ["475", "476", "471"],
"material_key": "4",
"material_values": ["644", "645", "643"]
},
{
"service_key": "596",
"process_key": "1",
"process_values": ["91", "89", "75"],
"product_key": "3",
"product_values": ["476", "476", "301"],
"material_key": "4",
"material_values": ["644", "647", "555"]
}
]
}
Notice how you can now manage your data if it ends up having multiple combinations of product_key, process_key and material_key.
The way to interpret the above document is that you have two nested documents inside a single document of my_services_index.
Sample Query:
POST my_services_index/_search
{
"_source": "services.service_key",
"query": {
"bool": {
"must": [
{
"nested": { <---- Note this
"path": "services",
"query": {
"bool": {
"must": [
{
"term": {
"services.service_key": "595"
}
},
{
"term": {
"services.process_key": "1"
}
},
{
"term": {
"services.process_values": "95"
}
}
]
}
},
"inner_hits": {} <---- Note this
}
}
]
}
}
}
Note that I've made use of Nested Query.
Response:
{
"took" : 3,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 1.828546,
"hits" : [ <---- Note this. Which would return the original document.
{
"_index" : "my_services_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.828546,
"_source" : {
"services" : [
{
"service_key" : "595",
"process_key" : "1",
"process_values" : [
"95",
"97",
"91"
],
"product_key" : "3",
"product_values" : [
"475",
"476",
"471"
],
"material_key" : "4",
"material_values" : [
"644",
"645",
"643"
]
},
{
"service_key" : "596",
"process_key" : "1",
"process_values" : [
"91",
"89",
"75"
],
"product_key" : "3",
"product_values" : [
"476",
"476",
"301"
],
"material_key" : "4",
"material_values" : [
"644",
"647",
"555"
]
}
]
},
"inner_hits" : { <--- Note this, which would tell you which inner document has been a hit.
"services" : {
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 1.828546,
"hits" : [
{
"_index" : "my_services_index",
"_type" : "_doc",
"_id" : "1",
"_nested" : {
"field" : "services",
"offset" : 0
},
"_score" : 1.828546,
"_source" : {
"service_key" : "595",
"process_key" : "1",
"process_values" : [
"95",
"97",
"91"
],
"product_key" : "3",
"product_values" : [
"475",
"476",
"471"
],
"material_key" : "4",
"material_values" : [
"644",
"645",
"643"
]
}
}
]
}
}
}
}
]
}
}
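Coming back to the original question (for example, products 91 and 95 must be offered within the same set), a sketch: put both term clauses inside a single nested query, because a nested query only matches when all of its clauses are satisfied by one and the same nested document.
POST my_services_index/_search
{
"query": {
"nested": {
"path": "services",
"query": {
"bool": {
"must": [
{
"term": {
"services.product_values": "91"
}
},
{
"term": {
"services.product_values": "95"
}
}
]
}
},
"inner_hits": {}
}
}
}
This matches the document above via set 595, whereas searching for 95 and 76 the same way returns nothing, since those two values never occur together inside one nested services document.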
Note that I've made use of the keyword datatype. Please feel free to use whatever datatypes your business requirements call for on any of the fields.
The idea I've provided is to help you understand the document model.
Hope this helps!

Real value not recognized sending JSON data from Kinesis Firehose to elasticsearch

I have an issue in Kibana with a field value, explained in the following lines. I'll try to describe the situation.
I'm sending DynamoDB streams to Lambda, then to Kinesis Firehose, and finally from Firehose to Elasticsearch. I'm using Kibana to visualize the data, and that is where I have the issue.
Let's say that I'm sending this JSON to DynamoDB:
{
"id": "identificator",
"timestamp": "2017-05-09T06:38:00.337Z",
"value": 33,
"units": "units",
"description": "This is the description",
"machine": {
"brand": "brand",
"application": "application"
}
}
In Lambda I receive the following:
{
"data": {
"M": {
"machine": {
"M": {
"application": {
"S": "application"
},
"brand": {
"S": "band"
}
}
},
"description": {
"S": "This is the description"
},
"id": {
"S": "identificator"
},
"units": {
"S": "units"
},
"value": {
"N": "33"
},
"_msgid": {
"S": "85209b75.f51ee8"
},
"timestamp": {
"S": "2017-05-09T06:38:00.337Z"
}
}
},
"id": {
"S": "85209b75.f51ee8"
}
}
If I forward this last JSON to Kinesis Firehose, then when I configure the index pattern in Kibana, it recognizes the "timestamp" automatically (and that's great). The problem here is that the field "value" is treated like a string and is not recognized as a number.
I tried to modify the JSON and then send it again to Firehose, but then Kibana doesn't recognize the "timestamp":
{
"data": {
"machine": {
"application": "application",
"brand": "brand"
},
"description": "This is the description",
"id": "identificator",
"units": "KWh",
"value": 33,
"_msgid": "85209b75.f51ee8",
"timestamp": "2017-05-09T06:38:00.337Z"
},
"id": "85209b75.f51ee8"
}
I would like to know how I could send this data so that Kibana recognizes both the "timestamp" and "value" fields.
This is an example of the code that I'm using in Lambda:
var AWS = require('aws-sdk');
var unmarshalJson = require('dynamodb-marshaler').unmarshalJson;
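// note: unmarshalJson is imported but never called below, so the forwarded
// record keeps DynamoDB's typed {"S": ...}/{"N": ...} attribute format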
var firehose = new AWS.Firehose();
exports.lambda_handler = function(event, context) {
var record = JSON.stringify(event.Records[0].dynamodb.NewImage);
console.log("[INFO]:"+JSON.stringify(event.Records[0].dynamodb.NewImage));
var params = {
DeliveryStreamName: 'DeliveryStreamName',
Record:{
Data: record
}
};
firehose.putRecord(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(JSON.stringify(data)); // successful response
context.done();
});
};
I solved it by creating the index mapping myself instead of letting Kinesis Firehose create it, declaring the "timestamp" attribute as { "type" : "date" } and the "value" attribute as { "type" : "float" }.
For instance for this type of JSON:
{
"data": {
"timestamp": "2017-05-09T11:30:41.484Z",
"tag": "tag",
"value": 33,
"units": "units",
"type": "type",
"machine":{
"name": "name",
"type": "type",
"company": "company"
}
},
"id": "85209b75.f51ee8"
}
I manually created the following Elasticsearch index and mapping:
PUT /index
{
"settings" : {
"number_of_shards" : 2
},
"mappings" : {
"type" : {
"properties" : {
"data" : {
"properties" : {
"machine":{
"properties": {
"name": { "type" : "text" },
"type": { "type" : "text" },
"company": { "type" : "text" }
}
},
"timestamp": { "type" : "date" },
"tag" : { "type" : "text" },
"value": { "type" : "float" },
"description": { "type" : "text" },
"units": { "type" : "text" },
"type" : { "type" : "text" },
"_msgid": { "type" : "text" }
}
},
"id": { "type" : "text" }
}
}
}
}
So to solve it, I think the better solution is that, in the Lambda, you check whether the index mapping exists and, if not, create it yourself.
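A minimal sketch of that check against the Elasticsearch REST API (host and index name are placeholders): a HEAD request on the index returns 200 if it exists and 404 otherwise, and only on a 404 do you create the index with the explicit mapping shown above.
# 200 if the index already exists, 404 otherwise
curl -s -o /dev/null -w "%{http_code}" -I "http://localhost:9200/index"

# run only when the check returned 404; mapping.json holds the mapping above
curl -X PUT "http://localhost:9200/index" -H "Content-Type: application/json" -d @mapping.json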

setting required on a json-schema array

I am trying to figure out how to set required on my json-schema array of objects. The required property works fine on an object, just not on an array.
Here is the items part of my json schema:
"items": {
"type": "array",
"properties": {
"item_id": {"type" : "number"},
"quantity": {"type": "number"},
"price": {"type" : "decimal"},
"title": {"type": "string"},
"description": {"type": "string"}
},
"required": ["item_id","quantity","price","title","description"],
"additionalProperties" : false
}
Here is the json array I am sending over. The json validation should fail since I am not passing a description in these items.
"items": [
{
"item_id": 1,
"quantity": 3,
"price": 30,
"title": "item1 new name"
},
{
"item_id": 1,
"quantity": 16,
"price": 30,
"title": "Test Two"
}
]
I got it to work with this validator by nesting the part of the schema for the array elements inside an object with the name items. The schema now has two nested items fields: one because items is a keyword in JSON Schema, and the other because your JSON actually has a field called items.
JSONSchema:
{
"type":"object",
"properties":{
"items":{
"type":"array",
"items":{
"properties":{
"item_id":{
"type":"number"
},
"quantity":{
"type":"number"
},
"price":{
"type":"number"
},
"title":{
"type":"string"
},
"description":{
"type":"string"
}
},
"required":[
"item_id",
"quantity",
"price",
"title",
"description"
],
"additionalProperties":false
}
}
}
}
JSON:
{
"items":[
{
"item_id":1,
"quantity":3,
"price":30,
"title":"item1 new name"
},
{
"item_id":1,
"quantity":16,
"price":30,
"title":"Test Two"
}
]
}
Output with two errors about missing description fields:
[ {
"level" : "error",
"schema" : {
"loadingURI" : "#",
"pointer" : "/properties/items/items"
},
"instance" : {
"pointer" : "/items/0"
},
"domain" : "validation",
"keyword" : "required",
"message" : "missing required property(ies)",
"required" : [ "description", "item_id", "price", "quantity", "title" ],
"missing" : [ "description" ]
}, {
"level" : "error",
"schema" : {
"loadingURI" : "#",
"pointer" : "/properties/items/items"
},
"instance" : {
"pointer" : "/items/1"
},
"domain" : "validation",
"keyword" : "required",
"message" : "missing required property(ies)",
"required" : [ "description", "item_id", "price", "quantity", "title" ],
"missing" : [ "description" ]
} ]
Try pasting the above into an online JSON Schema validator to see the same output generated.
I realize this is an old thread, but since this question is linked from jsonschema.net, I thought it might be worth chiming in...
The problem with your original example is that you're declaring "properties" for an "array" type, rather than declaring "items" for the array, and then declaring an "object" type (with "properties") that populates the array. Here's a revised version of the original schema snippet:
"items": {
"type": "array",
"items": {
"type": "object",
"properties": {
"item_id": {"type" : "number"},
"quantity": {"type": "number"},
"price": {"type" : "decimal"},
"title": {"type": "string"},
"description": {"type": "string"}
},
"required": ["item_id","quantity","price","title","description"],
"additionalProperties" : false
}
}
I would recommend against using the term "items" for the name of the array, to avoid confusion, but there's nothing stopping you from doing that...
Maybe your validator only supports JSONSchema v3?
The way required works changed between v3 and v4:
In v3 required is a boolean: https://datatracker.ietf.org/doc/html/draft-zyp-json-schema-03#section-5.7
In v4 required is an array of strings (like in your example): https://datatracker.ietf.org/doc/html/draft-fge-json-schema-validation-00#section-5.4.3
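A minimal side-by-side sketch of the two styles for the same constraint:
Draft v3, required is a boolean on each property:
{
"type": "object",
"properties": {
"item_id": { "type": "number", "required": true }
}
}
Draft v4, required is an array of property names at the object level:
{
"type": "object",
"properties": {
"item_id": { "type": "number" }
},
"required": ["item_id"]
}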
In Python, for me it worked like this (note this is the draft v3 style, with required as a boolean on the property):
schema = {
"type": "object",
"properties": {
"price": {"type": "number"},
"name": {"type": "string", 'required': True}, # set require = True
"details": {"type": "object"},
}
}

Facets tokenize tags with spaces. Is there a solution?

I have a problem with facets tokenizing tags that contain spaces.
I have the following mappings:
curl -XPOST "http://localhost:9200/pictures" -d '
{
"mappings" : {
"pictures" : {
"properties" : {
"id": { "type": "string" },
"description": {"type": "string", "index": "not_analyzed"},
"featured": { "type": "boolean" },
"categories": { "type": "string", "index": "not_analyzed" },
"tags": { "type": "string", "index": "not_analyzed", "analyzer": "keyword" },
"created_at": { "type": "double" }
}
}
}
}'
And my data is:
curl -X POST "http://localhost:9200/pictures/picture" -d '{
"picture": {
"id": "4defe0ecf02a8724b8000047",
"title": "Victoria Secret PhotoShoot",
"description": "From France and Italy",
"featured": true,
"categories": [
"Fashion",
"Girls",
],
"tags": [
"girl",
"photoshoot",
"supermodel",
"Victoria Secret"
],
"created_at": 1405784416.04672
}
}'
And my query is:
curl -X POST "http://localhost:9200/pictures/_search?pretty=true" -d '
{
"query": {
"text": {
"tags": {
"query": "Victoria Secret"
}
}
},
"facets": {
"tags": {
"terms": {
"field": "tags"
}
}
}
}'
The output is:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : null,
"hits" : [ ]
},
"facets" : {
"tags" : {
"_type" : "terms",
"missing" : 0,
"total" : 0,
"other" : 0,
"terms" : [ ]
}
}
}
Now I get total 0 in the facets and total 0 in the hits.
Any idea why it's not working?
I know that when I remove the keyword analyzer from tags and leave it "not_analyzed", I get results.
But there is still a problem with case sensitivity.
If I run the same query above after removing the keyword analyzer, I get this result:
facets: {
tags: {
_type: terms
missing: 0
total: 12
other: 0
terms: [
{
term: photoshoot
count: 1
}
{
term: girl
count: 1
}
{
term: Victoria Secret
count: 1
}
{
term: supermodel
count: 1
}
]
}
}
Here Victoria Secret is kept as a single term, spaces included, with "not_analyzed", but it is case sensitive: when I query with the lowercase "victoria secret" it doesn't give any results.
Any suggestions?
Thanks,
Suraj
The first examples are not totally clear to me. If you use the KeywordAnalyzer it means that the field will be indexed as it is, but then it makes much more sense to just not analyze the field at all, which amounts to the same thing. The mapping you posted contains both
"index": "not_analyzed", "analyzer": "keyword"
which doesn't make a lot of sense. If you are not analyzing the field, why would you select an analyzer for it?
Apart from this, of course if you don't analyze the field, the tag Victoria Secret will be indexed as it is, thus the query victoria secret won't match. If you want it to be case-insensitive you need to define a custom analyzer which uses the KeywordTokenizer (since you don't want to tokenize the value) plus the LowercaseTokenFilter. You can define a custom analyzer through the analysis section of the index settings and then use it in your mapping. But that way the facet would always be lowercase, which I guess is something you don't want. That's why it's better to define a multi field and index the field using two different text analysis chains, one for the facet and one for search.
You can create the index like this:
curl -XPOST "http://localhost:9200/pictures" -d '{
"settings" : {
"analysis" : {
"analyzer" : {
"lowercase_analyzer" : {
"type" : "custom",
"tokenizer" : "keyword",
"filter" : [ "lowercase"]
}
}
}
},
"mappings" : {
"pictures" : {
"properties" : {
"id": { "type": "string" },
"description": {"type": "string", "index": "not_analyzed"},
"featured": { "type": "boolean" },
"categories": { "type": "string", "index": "not_analyzed" },
"tags" : {
"type" : "multi_field",
"fields" : {
"tags": { "type": "string", "analyzer": "lowercase_analyzer" },
"facet": {"type": "string", "index": "not_analyzed"},
}
},
"created_at": { "type": "double" }
}
}
}
}'
Then the custom lowercase_analyzer will by default also be applied to the text query when you search on that field, so you can search for either Victoria Secret or victoria secret and get the result back. You need to change the facet part so that the facet runs on the new tags.facet field, which is not analyzed.
Furthermore, you might want to have a look at the match query, since the text query has been deprecated in the latest Elasticsearch version (0.19.9).
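Putting it together, the request would look something like this sketch (it keeps the pre-1.0 facet syntax from the question, uses match in place of the deprecated text query, and points the facet at the not-analyzed sub-field):
curl -X POST "http://localhost:9200/pictures/_search?pretty=true" -d '
{
"query": {
"match": {
"tags": "Victoria Secret"
}
},
"facets": {
"tags": {
"terms": {
"field": "tags.facet"
}
}
}
}'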
I think this gist makes the answer clearer:
https://gist.github.com/2688072