How do I integrate my application with an HL7 V2.5 hospital HIS?

My small company provides hospitals with a bug prevalence report. Until now we had only one customer, who provided the necessary data for the application (patient demographics, culture details, antibiotic dosage, etc.) as a CSV dump for us to ingest. Now larger hospitals are interested in our product, and their systems use HL7 V2.5. I've found that people use Mirth Connect for interfacing, but very little on how this entire setup works.
As a vendor, what are the things I need to do to make the integration possible?
From what I've read, one approach is the following:
Set up an HTTPS server and install and run Mirth Connect on it.
The hospital then sends HL7 messages (are these text files?) to this server on a Mirth channel. Mirth can help me parse these messages and extract the data.
I build further processing mechanisms to ingest that data into my application, which the hospital will use.
Also, what is the standard hospitals follow when they send HL7 messages? Do the hospital systems provide a consolidated HL7 file with the required data? Or will I, as the vendor, have to collect separate files and parse them to convert the data into a usable format?

The usual process for receiving HL7 data is to:
Stand up an integration engine such as Mirth, Lyniate Rhapsody, or Infor Cloverleaf.
Establish a TCP/IP connection (an HL7v2 interface) to send and receive HL7 messages with the other software system. HL7v2 connections are usually made through a VPN to provide additional security, since the transport protocol (MLLP) does not have any native security.
Configure your integration engine to parse and convert the messages into a format your application can understand.
There are alternatives such as HL7v2 over web services or via SFTP, but these aren't as common. HL7v2 messages aren't files, unless you are using the SFTP process to actually download/upload messages. Each HL7v2 message represents a single event and is almost always transmitted individually in near-real-time.
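For orientation, an HL7v2 message is just delimited text: segments separated by carriage returns, fields separated by pipes. A minimal sketch of pulling patient demographics out of an ADT message by hand (the sample message content below is invented; a real deployment would lean on the integration engine or an HL7 library rather than hand-rolled splitting):

```python
# Minimal HL7v2 parsing sketch. The sample ADT message is invented for
# illustration; real messages come in over MLLP via the integration engine.
SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|HIS|HOSPITAL|BUGAPP|VENDOR|202301011200||ADT^A01|MSG0001|P|2.5",
    "PID|1||123456^^^HOSP^MR||DOE^JANE||19800101|F",
    "PV1|1|I|WARD1^101^A",
])

def parse_segments(message: str) -> dict:
    """Split an HL7v2 message into {segment_name: [fields]}."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

segs = parse_segments(SAMPLE_ADT)
pid = segs["PID"]
family, given = pid[5].split("^")[:2]   # PID-5: patient name (family^given)
print(family, given, pid[7])            # DOE JANE 19800101
```

Note this sketch keeps only the last segment of each name; real messages can repeat segments (e.g. multiple OBX), which is one reason an engine or library is preferable.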

Related

Watson training data format

I want Watson to train on data the user will provide in my web app; the data will be posted through forms.
My question is: which service in IBM Cloud matches this best? I've tried Discovery, but it doesn't seem to be the best match for my request; for one thing, it does not accept .json or Excel formatted files, which seems like a red flag given what I am looking for.
My ultimate goal is for Watson to learn the patterns and ultimately start providing suggestions for the user.
The data I give to Watson would look like this, in .json format:
{
"songName" : "Beyond the sea",
"artist" : "Bobby Darin",
"genre" : "jazz"
}
Thank you in advance.
I've set up IBM Cloud, enabled Discovery as a service, and attempted to upload .json and Excel files, both of which were rejected.
I expected Watson to process the provided structured data, find patterns, and provide intelligent suggestions.
If you are planning to build and train a model, then the Watson Machine Learning service on IBM Cloud is what you are looking for. After you build and train a model, you deploy it and put it into production, so you can pass data to the model and get scoring data, also known as predictions, back.
You can access your deployment through an API endpoint, using the Watson Machine Learning Python client, CLI, or REST API (manually or in an application) to analyze data.
To learn more about deploying your model, refer to the Watson Machine Learning documentation.
There is also a code pattern that serves as an example of how to build a machine-learning recommendation engine using Jupyter notebooks on Watson Studio.
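As a rough sketch of what scoring looks like from code: you send your records to the deployment endpoint as a fields/values JSON payload. The endpoint URL below is a hypothetical placeholder, and the exact payload shape depends on the WML API version you deploy against, so treat the names here as assumptions:

```python
import json

# Sketch of building a Watson Machine Learning scoring payload.
# SCORING_URL is a hypothetical placeholder; the payload shape assumed
# here is the fields/values format used by WML deployment scoring.
SCORING_URL = "https://example.cloud.ibm.com/ml/v4/deployments/YOUR_DEPLOYMENT/predictions"

def build_scoring_payload(records: list) -> dict:
    """Convert a list of uniform dicts into a fields/values payload."""
    fields = list(records[0].keys())
    values = [[rec[f] for f in fields] for rec in records]
    return {"input_data": [{"fields": fields, "values": values}]}

payload = build_scoring_payload([
    {"songName": "Beyond the sea", "artist": "Bobby Darin", "genre": "jazz"},
])
print(json.dumps(payload))
# To actually score, POST this JSON to SCORING_URL with an IAM bearer token,
# e.g. requests.post(SCORING_URL, json=payload,
#                    headers={"Authorization": "Bearer " + token})
```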

How to exchange data between MySQL and Parse.com?

We plan to use a MySQL database as the backend for our (Java or Ruby on Rails) based web application. After completing the web application, we want to port the application to iOS and Android.
We want to be able to run the application in "native" mode - that is, if a network connection is not available to the smart-phone, the system should be able to store the data locally, and sync with the backend when the network connection becomes available.
A library/framework such as Parse.com seems like the best fit for this kind of syncing.
The question to which we want an answer is: is it possible to exchange data between the web application data stored in MySQL, and the Parse.com data which is stored in a proprietary format on the Parse servers?
It is indeed possible, but syncing data is an advanced topic.
However, you also state that you want to access the Parse data when offline. As I understand it, you want Parse to handle the offline state and then sync to MySQL when the connection is back up. Parse does not offer functionality to store data offline, other than caching requests. You probably need another service for your specific needs.
I might have misunderstood the use case. If so, my alternate understanding would be that ALL data for the smartphones will be handled by Parse, both offline and online, with syncing. The answer is still the same: Parse does not offer this kind of functionality.
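If you end up rolling your own offline support, the usual pattern is a local write-ahead queue: writes land in local storage (e.g. SQLite on the device) and are replayed to the backend when connectivity returns. A rough sketch of the idea, where sync_to_backend is a stand-in for whatever Parse or MySQL API call you would really make:

```python
import json
import sqlite3

# Offline write-queue sketch: queue changes locally, replay them when online.
# sync_to_backend is a stand-in for your real Parse or MySQL API call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pending (id INTEGER PRIMARY KEY, payload TEXT)")

def record_change(change: dict) -> None:
    """Queue a change locally; safe to call with no network available."""
    conn.execute("INSERT INTO pending (payload) VALUES (?)",
                 (json.dumps(change),))
    conn.commit()

def flush(sync_to_backend) -> int:
    """Replay queued changes in order, deleting each only after it syncs."""
    rows = conn.execute("SELECT id, payload FROM pending ORDER BY id").fetchall()
    for row_id, payload in rows:
        sync_to_backend(json.loads(payload))  # may raise -> row stays queued
        conn.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        conn.commit()
    return len(rows)

record_change({"table": "notes", "op": "insert", "text": "hello"})
print(flush(lambda change: None))  # prints 1
```

Deleting a row only after its sync call succeeds means a crash or network failure mid-flush leaves the remaining changes queued for the next attempt.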

How to use Fuse ESB to glue multiple applications together

I have two applications: one deployed on server 1, the other deployed on server 2. Application one wants to post some data to application two, and when application two's processing completes, it will send an event back to application one. My customer suggests we use Fuse ESB. How do we implement this?
Any answer is appreciated.
You can use messaging for that. Fuse ESB comes with ActiveMQ as the out-of-the-box messaging solution. You can have application A send a message to a queue, which application B picks up, and then send back a response on a response queue for application A to receive.
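The request/reply pattern just described can be shown in miniature. Here Python's in-process queue.Queue stands in for the ActiveMQ request and response queues (in Fuse ESB these would be JMS destinations, not in-process objects):

```python
import queue
import threading

# Miniature of the queue-based request/reply pattern: in-process
# queue.Queue objects stand in for ActiveMQ/JMS queues.
request_q = queue.Queue()
response_q = queue.Queue()

def application_b() -> None:
    """Application B: consume one request, process it, reply on the response queue."""
    msg = request_q.get()
    response_q.put({"correlation_id": msg["correlation_id"],
                    "result": msg["data"].upper()})

threading.Thread(target=application_b, daemon=True).start()

# Application A sends a request and blocks waiting for the correlated reply.
request_q.put({"correlation_id": "req-1", "data": "hello"})
reply = response_q.get(timeout=5)
print(reply["result"])  # HELLO
```

The correlation id is what lets application A match a reply to the request that caused it, which matters once more than one request is in flight.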
You can also use other kinds of transports such as HTTP, web services, TCP, and many more.
Fuse ESB also ships out of the box with Apache Camel, which has a lot of transports and components ready to use. See the list here: http://camel.apache.org/components
I suggest you read the Fuse ESB Product Introduction guide at: https://access.redhat.com/knowledge/docs/en-US/Fuse_ESB_Enterprise/7.1/html-single/Product_Introduction/index.html

Best practice - transferring data between web services using an ESB

I would like to ask about best practice for sending data (POST/GET variables) between two web services, where an ESB sits between them:
WEB_SERVICE1 <-----------> ESB <----------> WEB_SERVICE2
Should I create another webservice in ESB, which will transfer data between WEB_SERVICE1 and WEB_SERVICE2?
Translation within the ESB is how you should transfer data from one web service to another.
You should leverage the ESB to do the communication between the two.
You generally use the translators/mappers provided by the ESB framework to handle the translation and formatting of data coming in and out.
Web Service 1 pushes message to ESB
ESB reads post/get data, formats data to meet Web Service 2 demands
ESB redirects/post to Web Service 2
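The translation step in the middle can be as simple as a field mapping. A sketch, where both message shapes are invented purely for illustration:

```python
# Sketch of an ESB translation step: map Web Service 1's POST/GET
# parameters into the shape Web Service 2 expects. Both shapes here
# are invented for illustration.
def translate(ws1_params: dict) -> dict:
    """Map WS1 request parameters to the WS2 message format."""
    return {
        "customer": {
            "id": int(ws1_params["cust_id"]),
            "name": ws1_params["cust_name"],
        },
        # WS1 sends a decimal string; WS2 (in this sketch) wants cents.
        "amount_cents": int(round(float(ws1_params["amount"]) * 100)),
    }

msg = translate({"cust_id": "42", "cust_name": "Acme", "amount": "19.99"})
print(msg["amount_cents"])  # 1999
```

In a real ESB you would express this mapping with the framework's own translator/mapper facilities rather than ad-hoc code, so the transformation is visible and maintainable in one place.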
Edit
You might want to give us some more info on how you are planning to use these. Are you just trying to call one service from another? Or are you trying to do something more?
If you just want to avoid worrying about point-to-point connections, then ideally you would have web service 1 push a message to the message bus; the message bus would pick it up, translate it, and deliver it to web service 2 (or any other subscribers).
Take a look at message endpoints in the Fuse Integration Patterns document.

How to process emails arbitrarily as they come in?

I am trying to design a system that will catch emails that are submitted to a server and process them. The processing will probably mean parsing metadata such as IP addresses, the email body and subject, etc., and submitting these details to a web service.
I'd like to know what software (SMTP servers or similar) can either be integrated with, to perform arbitrary processing, or which servers will support forwarding to a web service (with queuing and retrying) out of the box.
Exchange is not a preferred option, as I'd like to keep that off the live servers.
It's probably easiest to use any mail server and process messages by pulling them directly out of the mailbox of the email user(s) on the system via IMAP, POP3, or something similar. Some mail servers are built with third-party access in mind, where you can register for events on new mail arriving so you don't have to poll for new messages. Different mail servers have different native access protocols and APIs: Exchange has IMAP and Exchange Web Services (EWS), Domino has C++/COM APIs, and GroupWise has a web service. All of them will support some kind of default client access protocol like IMAP or POP3. The native protocols give more features (like notifications), but for your purposes IMAP or POP3 may be enough.
There are a ton of ways to do this. A lot of this can also depend upon price, client, and the tech environment.
If this is an MS environment and cost is a factor, one way you can do this is to use the built-in IIS SMTP service. The IIS SMTP service is most commonly used for sending emails; however, it can be configured to actually accept email. If you configure the service for a domain, all incoming email for that domain is placed in the mailroot/drop directory as a text file (in {guid}.eml format).
You can then use a MIME parser to parse these files and apply any business rules you want to them. Since this is done at the filesystem level, you won't have to worry about intercepting any network calls. Just grab a file, parse it, and move on to the next file. Then have your app sleep for X seconds and check whether any new emails have come in.
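If you're not tied to .NET, parsing those dropped .eml files needs no third-party MIME library at all; Python's standard email module handles it. A sketch (the raw message below is invented; against IIS you would read each {guid}.eml file's contents instead):

```python
from email import message_from_string
from email.utils import parseaddr

# Sketch of parsing a dropped .eml file with Python's standard library.
# The raw message is invented; in IIS's mailroot/drop directory you would
# read each {guid}.eml file instead of using an inline string.
RAW = """\
Received: from mail.example.com ([203.0.113.7])
From: Alice <alice@example.com>
To: intake@example.org
Subject: Bug report

Something is broken.
"""

msg = message_from_string(RAW)
sender = parseaddr(msg["From"])[1]
print(msg["Subject"], sender)  # Bug report alice@example.com
# The Received header carries the relay IP, which you could extract with a
# regex such as r"\[(\d{1,3}(?:\.\d{1,3}){3})\]" before posting to your
# web service.
```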
PS: shameless plug -- you can use aspNetMime for parsing these files, and extracting the data.
It might be worth your while to implement a simple lexer/parser to scan the header and look for specific header information. If it is an IP address, you may get away with a regex.
Ideally, you would connect to the POP3 port of the server where the emails are stored and do a quick scan of the information. If the subject line or the message contains a specific string indicating that it is the email you want, or even the IP address within the header, then you can retrieve that email and process it. This is where I think the lexing/parsing of the email would happen, prior to pulling it down based on the criteria.
Bear in mind that you do not want to pull down an email that is not relevant; if you did, how would you deal with it?
DotNetOpenMail might also help, as it does not necessarily pull down all emails...
Hope this helps,
Best regards,
Tom.