I have 100 PDF documents. I have used the Watson Document Conversion service to convert the PDFs into JSON Answer Units. Now I need to train on these documents.
I have written Python code which needs the JSON Answer Units and a document relevance score as input to Watson Retrieve and Rank. How do I refer to the JSON Answer Units from Python code, or how do I download the JSON Answer Units from the Document Conversion service through the Python API?
I think you can look at this example from the IBM developers (Node SDK).
It shows one way of referring to JSON Answer Units.
The programming language is different, but you can use the same logic to do what you want: the Document Conversion integration example shows how to convert a document into Answer Units using the Document Conversion service and upload them to the Retrieve and Rank service to make the Answer Units searchable.
1. Create a Solr cluster, upload the Solr configuration, and create a collection.
1.1 In the files retrieve_and_rank_lifecycle.v1.js and retrieve_and_rank_solr.v1.js you will find example functions showing how to perform these steps.
1.2 IMPORTANT: When uploading the Solr configuration, use the answer_unit_config.zip from the resources folder, which includes a schema.xml that defines the fields that will be indexed.
2. Edit the file document_conversion_integration.v1.js and enter the following:
2.1 service credentials for the Document Conversion and the Retrieve and Rank services (each service instance has a different set of credentials)
2.2 clusterId (obtained when creating the cluster)
2.3 collectionName and inputDocument if you are using a different value from the default
3. Run the following command:
node document_conversion_integration.v1.js
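Since you asked about Python: the same flow is available through the watson-developer-cloud Python SDK. Below is a minimal, untested sketch of the conversion step; the credentials, version date, and file names are placeholders, and depending on your SDK version convert_document may return a parsed dict or a requests Response (call .json() on it in the latter case).

import json
from watson_developer_cloud import DocumentConversionV1

document_conversion = DocumentConversionV1(
    username='YOUR_USERNAME',
    password='YOUR_PASSWORD',
    version='2016-02-10')

# Convert one PDF into JSON Answer Units.
with open('example.pdf', 'rb') as pdf:
    config = {'conversion_target': DocumentConversionV1.ANSWER_UNITS}
    answer_units = document_conversion.convert_document(
        document=pdf, config=config, media_type='application/pdf')

# Save the Answer Units locally so your Retrieve and Rank training
# code can read them later.
with open('example_answer_units.json', 'w') as out:
    json.dump(answer_units, out, indent=2)

Loop this over your 100 PDFs and you have the Answer Units on disk, ready to index and to pair with your relevance scores.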
I have an assignment to display flight offers from the Amadeus API on an HTML page. In order to do that I first need to process the JSON from Amadeus with Apache Spark. How would selecting a date in HTML call the API to receive the JSON, and how would this JSON be transferred to Apache Spark for processing? What would be a high-level overview of what I am supposed to do?
I thought of using Flask to execute a Python script that stores the JSON in a folder monitored by Spark Streaming, but I don't know if that is a good idea.
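For what it's worth, the idea you describe is only a few lines of Flask. A rough sketch, where fetch_offers is a hypothetical stand-in for your Amadeus client call and the landing directory is whatever path your Spark Streaming job monitors:

import json
import time
from pathlib import Path
from flask import Flask, request

app = Flask(__name__)
LANDING_DIR = Path('/data/offers')  # the folder your Spark job watches

def fetch_offers(date):
    # Placeholder: replace with a real Amadeus SDK or REST call.
    return {'date': date, 'offers': []}

@app.route('/offers')
def offers():
    date = request.args.get('date')  # the date picked on the HTML page
    payload = fetch_offers(date)
    # Write each response as its own file; Spark Streaming picks up
    # new files in the directory as they appear.
    out = LANDING_DIR / 'offers-{}-{}.json'.format(date, int(time.time()))
    out.write_text(json.dumps(payload))
    return {'stored': out.name}

Whether a file drop is the right hand-off depends on your latency needs; a queue (e.g. Kafka) is the more common way to feed Spark Streaming, but the monitored-directory approach does work.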
I have backend APIs with multiple controllers that split up the operations: some are for 3rd parties, others are for frontend proxies / other microservices, and then there is a support/admin controller. I don't want all of these in the same APIM API / Product.
Currently I either have to manually edit the OpenAPI definition of the API before importing it into APIM, or I have to manually create the API in APIM and then use the dev tools extractor to export the templates for other environments.
My stack is
dotnet 5.0 / 6.0 (ASP.NET) with NSwag to document the API, using the Azure APIM development toolkit to automate the bits we can.
I'd like an automated pipeline where the backend API is built, with separate OpenAPI definitions or a way to filter controllers during import; the output from that goes into a pipeline for APIM, which then updates the dev environment APIM and can be auto-deployed into other environments when needed.
Does anyone else do this type of thing, or do you put the whole API into a single APIM API/Product? Or do you have completely separate backend APIs that make up a microservice? Or something else?
I want to share what we do at work.
The key idea is to have a custom OpenAPI processor (ideally a command-line tool, so that you can call it in a pipeline) combined with OpenAPI extension specs. The processor is basically a YAML or JSON parser based on your API spec format.
You create a master OpenAPI spec file that contains all the operations in your controller.
Create an extension, say x-api-product: "Product A", and add it to the relevant API operations.
Your custom processor takes in the master spec file (e.g. YAML), groups the operations by x-api-product, and outputs a set of new OpenAPI specs (see the sketch after this list).
Import the output files into APIM.
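A minimal sketch of such a processor in Python with PyYAML; the file names and the fallback product name are assumptions, and a real tool would also prune unused components from each output:

from collections import defaultdict
from copy import deepcopy
import yaml

with open('master-openapi.yaml') as f:
    master = yaml.safe_load(f)

# Group each (path, verb) operation by its x-api-product value.
products = defaultdict(dict)
for path, verbs in master.get('paths', {}).items():
    for verb, operation in verbs.items():
        if not isinstance(operation, dict):
            continue  # skip path-level keys such as 'parameters'
        product = operation.get('x-api-product', 'Default')
        products[product].setdefault(path, {})[verb] = operation

# Emit one spec per product, reusing the master's metadata and schemas.
for product, paths in products.items():
    spec = deepcopy(master)
    spec['paths'] = paths
    filename = product.replace(' ', '-').lower() + '-openapi.yaml'
    with open(filename, 'w') as f:
        yaml.safe_dump(spec, f, sort_keys=False)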
The thing to think about is how you manage the master API spec file. We follow the API Spec First approach, where we manually create and modify the YAML file, and use OpenAPI Generator to code gen the controllers.
Hope this gives you some ideas.
I am using the Python SDK for OCI. I tried the Upload Manager example and it works perfectly fine when I upload files from the file system. But I have to expose this Python code as a REST service (using Flask), and the files to be uploaded to Object Storage will come in as the REST payload. Does it have to be the multipart/mixed content type in this case, or can it be multipart/form-data as well?
@user1945183, are you asking if Upload Manager can support multipart/form-data? Yes, Upload Manager can take in multipart/form-data.
The Upload Manager uses Object Storage's CreateMultipartUpload API. You can learn more from the CreateMultipartUpload API doc.
From the CreateMultipartUploadDetails reference you will find that the content type is optional and has no effect on Object Storage behavior:
The optional Content-Type header that defines the standard MIME type format of
the object to upload. Specifying values for this header has no effect on
Object Storage behavior. Programs that read the object determine what to do
based on the value provided. For example, you could use this header to
identify and perform special operations on text only objects.
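To connect the dots with Flask: here is a hedged sketch of an endpoint that accepts multipart/form-data and streams the file to Object Storage via UploadManager; the namespace and bucket names are placeholders.

import oci
from flask import Flask, request

app = Flask(__name__)

config = oci.config.from_file()  # reads ~/.oci/config by default
object_storage = oci.object_storage.ObjectStorageClient(config)
upload_manager = oci.object_storage.UploadManager(object_storage)

@app.route('/upload', methods=['POST'])
def upload():
    # request.files is populated when the client sends multipart/form-data.
    incoming = request.files['file']
    upload_manager.upload_stream(
        namespace_name='your-namespace',
        bucket_name='your-bucket',
        object_name=incoming.filename,
        stream_ref=incoming.stream)
    return {'uploaded': incoming.filename}, 201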
I am trying to write a JSON exporter in Go using client_golang.
I could not find any useful example for this. I have a service ABC that produces JSON output over HTTP. I want to use client_golang to export these metrics to Prometheus.
Take a look at the godoc for the Go client; it is very detailed and contains plenty of examples. The one for the Collector interface is probably the most relevant here:
https://godoc.org/github.com/prometheus/client_golang/prometheus#example-Collector
Essentially, you would implement the Collector interface, which contains two methods: Describe and Collect.
Describe simply sends descriptions of the possible metrics of your Collector over the given channel. This includes their name, possible labels, and help string.
Collect creates the actual metrics that match the descriptions from Describe and populates them with data. So in your case, it would GET the JSON from your service, unmarshal it, and write the values to the relevant metrics.
In your main function, you then have to register your collector, and start the HTTP server, like this:
prometheus.MustRegister(NewCustomCollector()) // register your Collector implementation
http.Handle("/metrics", promhttp.Handler())   // expose the default registry
log.Fatal(http.ListenAndServe(":8080", nil))  // serve until a fatal error
You mean you want to write an exporter for your own service in Go? The exporters listed on the Prometheus exporters page are all good examples, many of which are written in Go; you could pick a simple one, like the Redis exporter, to see how it's implemented.
Basically what you need to do is:
Define your own Exporter type
Implement the prometheus.Collector interface; you can poll the JSON data from your service and build metrics based on it
Register your Exporter with Prometheus via prometheus.MustRegister
Start an HTTP server and expose the metrics endpoint for Prometheus to scrape
We need a JSON mapper from Type-A to Type-B (i.e., JSON-to-JSON string transformation). I'm aware of ESB tools which have XML-to-XML mapping, like IBM ESB.
So do we have any open source tool or paid application which:
Has an editor to map JSON to other JSON, with the capability to do some basic operations like formatting, etc.
Can expose this transformation as a REST service
If need be, lets us extract this transformation logic as a JAR file so another team can use it
Thanks.
Manjesh,
I have good news for you. There is indeed an open source program that will accomplish this for you: Talend Open Studio (TOS) ESB (not to be confused with their TOS for Data Integration). Any ESB tool should do this quite easily. See below:
The first image shows SoapUI calling a REST service: the JSON prefix: team1, team: Giants is sent in, and I return Prefix: Cowboys are better than, Team: Giants. I could have done other manipulations (including changing the JSON structure), but I put together a simple example.
The next image shows the REST service implementation within Talend:
Finally, I show the internals of the component (tXMLMap_2) where I manipulate the JSON data.
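For comparison, if you only needed the REST-service part (your second point), the same mapping is a few lines by hand; a minimal Python/Flask sketch using the field names from the example above:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/transform', methods=['POST'])
def transform():
    body = request.get_json()
    # Rename the keys and rewrite the prefix value, as in the
    # Talend example above.
    return jsonify({
        'Prefix': 'Cowboys are better than',
        'Team': body.get('team'),
    })

A dedicated ESB tool still wins once you need the visual mapping editor and the JAR-style reuse you asked about.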