Is there a way to make Fiware-Cepheus take more than one configuration file? I want to send different data types and process them in real time, but I can't install a separate Cepheus VM for each data federation. Is there a way to make Cepheus accept different configuration files?
Fiware-Cepheus supports an (undocumented...) multi-tenant mode where multiple configurations can live together on the same instance (sharing available memory/CPU resources).
You can enable the multi-tenant profile from app.properties:
spring.profiles.active=multi-tenant
or at boot time with the --spring.profiles.active=multi-tenant command line argument.
Then, for each request, you can add two HTTP headers: Fiware-Service (default value: default) and Fiware-ServicePath (default value: /).
The Fiware-Service header can only contain [A-Za-z0-9_] characters, and the Fiware-ServicePath header must start with a /.
Each tenant needs its own configuration setup (using the same headers) before it can start processing data.
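As an illustration, a per-tenant configuration request could look roughly like the following sketch (the host, port, /v1/admin/config path and tenantA-config.json payload are assumptions to adapt to your Cepheus-CEP deployment):
# Hypothetical sketch: push a CEP configuration for tenant "tenantA"
curl -X POST http://cepheus-host:8080/v1/admin/config \
     -H "Content-Type: application/json" \
     -H "Fiware-Service: tenantA" \
     -H "Fiware-ServicePath: /sensors" \
     -d @tenantA-config.json
The same two headers must then accompany the data requests for that tenant.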
For more information, please read the related code.
I have been asked to implement a centralized monitoring and logging system using DataDog that will receive information from various services and applications, some running as Windows Services on virtual machines and some running inside a Kubernetes cluster. In order to implement the logging aspect so that DataDog can correctly ingest the logs, I'm using Serilog to do the logging.
My plan is currently to write the logs to the console in json format and have the DataDog agent installed on each server or k8s node capture and ship them to DataDog. This works, at least for the k8s node where I've implemented it so far. (I'm trying to avoid using the custom Serilog sink for DataDog as that's discouraged in the DataDog documentation).
My problem is that I cannot get logs ingested correctly on the DataDog side. DataDog expects the json to contain a property called Message, but Serilog names this property RenderedMessage (if I use JsonFormatter(renderMessage: true)) or #m (if I use RenderedCompactJsonFormatter()).
How can I get my logs shipped to DataDog and ingested correctly on the DataDog end?
Answering my own question.
The DataDog logging page has a Configuration section. On that page, the "Pre processing for JSON logs" section allows you to specify alternate property names for a few of the major log message properties. If you add #m to the Message attributes section and #l to the Status attributes section, you will correctly ingest JSON messages from the RenderedCompactJsonFormatter formatter. If you add RenderedMessage and Level respectively, you will correctly ingest messages from the JsonFormatter(renderMessage: true) formatter. You can specify multiple attributes in each section, so you can simultaneously support both formats.
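For reference, an event written by JsonFormatter(renderMessage: true) has roughly this shape (the values here are purely illustrative); RenderedMessage and Level are the properties to map in DataDog's preprocessing:
{"Timestamp":"2023-01-01T12:00:00.0000000+00:00","Level":"Information","MessageTemplate":"Processed {Count} items","RenderedMessage":"Processed 42 items","Properties":{"Count":42}}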
If you use the Console sink,
just apply a corresponding colored theme to it.
I am routing messages from an Azure IoT Hub to a blob container (Azure Storage as a routing endpoint). The messages sent to the IoT Hub are of Content Type: 'application/json' and Content Encoding: 'UTF-8'. However, when they arrive in blob storage several of these messages are batched together into one file with Content Type 'application/octet-stream'. Thus, for instance Power BI is not able to read these files in JSON format when reading directly from the blob.
Is there any way to route these messages so that each single message is saved as a json file in the blob container?
TL;DR: Please make use of the Encoding option to specify AVRO or JSON format, and Batch Frequency/Size to control the batching.
"With an Azure Storage container as a custom endpoint, IoT Hub will write messages to a blob based on the batch frequency and block size specified by the customer. After either the batch size or the batch frequency is hit, whichever happens first, IoT Hub will then write the enqueued messages to the storage container as a blob. You can also specify the naming convention you want to use for your blobs, as shown below."
In the IoT Hub's message routing section, add a custom endpoint that points to a blob storage container.
When configuring that endpoint, set the batch frequency and chunk size, and use the Encoding option to choose the message format (AVRO or JSON).
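The same endpoint can also be created from the Azure CLI. The following is only a sketch: the resource names are placeholders, and the exact flag names and value ranges are assumptions to verify against the current az iot hub routing-endpoint documentation.
# Sketch only: check flag names against the az CLI docs before use
az iot hub routing-endpoint create \
    --resource-group myResourceGroup \
    --hub-name myIotHub \
    --endpoint-name blobEndpoint \
    --endpoint-type azurestoragecontainer \
    --endpoint-resource-group myResourceGroup \
    --endpoint-subscription-id <subscription-id> \
    --connection-string "<storage-account-connection-string>" \
    --container-name telemetry \
    --encoding json \
    --batch-frequency 60 \
    --chunk-size 10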
The message encoding needs to be done by the device stream or as part of a module to translate the protocol. Each protocol (AMQP, MQTT, and HTTP) uses a different method to encode the message from base64 to UTF-8.
To route messages based on message body, you must first add property 'contentType' (ct) to the end of the MQTT topic and set its value to be application/json;charset=utf-8. An example is shown below.
devices/{device-id}/messages/events/$.ct=application%2Fjson%3Bcharset%3Dutf-8
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-mqtt-support
I need to test a set of HTTP Tomcat servers using JMeter. I have a list of 'n' target servers, a list of 'n' JMeter remote test servers, and a list of values for a single variable parameter that will be tested in the URL against each Tomcat server. However, each JMeter remote test server should test only a single target server with a set of threads cycling through all the parameters - so the test is 1 to 1, but I need to collate the results as the target servers form a CDN edge set with a common origin URL.
How could I ensure that each target server is tested by a single JMeter server using the same set of parameters? Using a CSV DATA SET CONFIG for the list of parameters is obvious but I can't see how I could use the same type of config element for the list of target servers.
The only way I can envision it is that I distribute a csv file containing a single unique target URL to each remote JMeter server. That way all the hundreds of threads on each JMeter server only know about one target but can use a single filename for the source URL. Does anyone know of a better way?
JMeter remote "slaves" basically execute the same test plan that is specified on the "master" machine, so the default configuration is not something you can use.
There are 2 options:
You can modify the user.properties file on each slave machine to contain a single unique endpoint (URL) like:
in the user.properties file, define a property which will specify the endpoint:
url=http://some.cdn.1
in the Test Plan, use the __P() function to read the value, like:
${__P(url)}
So, given a different url property on each remote slave, each slave will hit a different endpoint. See the Apache JMeter Properties Customization Guide for more information regarding JMeter properties and ways of working with them.
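To avoid editing user.properties by hand, the same property can also be supplied when each slave process is started, for example (the url property name is only an illustration; verify how your JMeter version handles -J on the server side):
jmeter-server -Jurl=http://cdn-node-1.example.com   # on slave 1
jmeter-server -Jurl=http://cdn-node-2.example.com   # on slave 2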
You can use the __machineIP() or __machineName() function in conjunction with the If Controller to switch execution branches based on the "slave" IP address or hostname, something like:
If Controller, condition: "${__machineIP()}" == "172.30.40.50"
do something specific for CDN1
If Controller, condition: "${__machineIP()}" == "172.30.40.51"
do something specific for CDN2
etc.
I'm trying to create a service that writes the input and output pipeline (XML data) every time one of the installed services is invoked. How should my service know when another service is invoked? Is there any built-in tool to catch the in & out pipeline and save it to a file or any other destination?
You can use one of webMethods' built-in services in the WmPublic package:
pub.flow:savePipelineToFile
Note that it's not recommended to use "savePipelineToFile" in production, for obvious performance/resource reasons. And of course, to load the pipeline from the file use:
pub.flow:restorePipelineFromFile
The usual workflow for debugging in webMethods is:
1) Add the "savePipelineToFile" step as the first instruction in your flow service.
2) Execute the flow service, or execute it from some client that will trigger the flow service.
3) Disable the "savePipelineToFile" step in the flow service.
4) Add the "restorePipelineFromFile" step at the very top of your flow service.
5) Run webMethods Designer in Debug mode and step through. Once it goes over the "restorePipelineFromFile" step, you should see the entire content of the pipeline, including the inputs as passed by the client. If you continue stepping through your flow service step by step, you should see what the final output is.
The services "savePipelineToFile" and "restorePipelineFromFile" are very useful to debug your services. The "pipeline" file will be located at:
IntegrationServer\pipeline
If your requirements dictate that you need to produce xml flat files then use the following:
To serialize documents to xml (WmPublic package)
pub.xml:documentToXMLString
To write xml string to file (WmPublic package)
pub.file:stringToFile
Hope it helps!
EDIT:
By default the soap headers are hidden. Most of the time you don't need the soap headers but you can force webMethods to add the soap headers to the pipeline by doing the following.
1) Click on the web service descriptor.
2) In the properties panel, set "Pipeline Headers Enabled" to true.
The soap headers won't appear in your pipeline at design time. It will only exist at runtime. If you want to see the content of the soap header then you will need to use the "savePipelineToFile" / "restorePipelineFromFile" technique that I described previously.
To make the soap header appear at design time, you will have to do implicit mapping to the soap header fields as if they were there.
I am looking for a way to dump the HTTP request & response body (JSON format) in RESTEasy on WildFly 8.2.
I've checked this answer Dump HTTP requests in WildFly 8 but it just dumps headers.
I want to see the incoming json message and outgoing one as well.
Can this be done through configuration, without a filter or any coding?
Logging HTTP bodies is not something frequently done. That's probably the primary reason why RequestDumpingHandler in Undertow only logs the header values. Also keep in mind that the request body is not always very interesting to log; think, for example, of WebSockets or transmitting big files. You can write your own MessageBodyReader/Writer for JAX-RS, write to a ByteArrayOutputStream first, and then log the captured content before passing it on. However, given the proven infeasibility of this in production, I think you're mostly interested in how to do this during development.
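If you do want to go the coding route mentioned above, here is a minimal, hedged sketch using a JAX-RS 2.0 WriterInterceptor (a close relative of a custom MessageBodyWriter); the class name and the use of java.util.logging are just illustrative choices:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

// Captures the serialized response entity (e.g. the outgoing JSON), logs it,
// and then forwards the bytes to the real output stream.
@Provider
public class ResponseBodyLoggingInterceptor implements WriterInterceptor {

    private static final Logger LOG =
            Logger.getLogger(ResponseBodyLoggingInterceptor.class.getName());

    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        OutputStream original = context.getOutputStream();
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        // Let the real MessageBodyWriter serialize the entity into our buffer.
        context.setOutputStream(buffer);
        context.proceed();

        byte[] body = buffer.toByteArray();
        LOG.info("HTTP response body: " + new String(body, StandardCharsets.UTF_8));

        // Pass the captured bytes on to the client and restore the stream.
        original.write(body);
        context.setOutputStream(original);
    }
}
A ReaderInterceptor can capture the incoming request body in the same fashion; keep such interceptors out of production for the reasons mentioned above.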
You can capture HTTP traffic (and in fact any network traffic) using tcpflow or Wireshark. Sometimes people use tools such as netcat to quickly write traffic to a file. You can also use, for example, the Chrome debugger to read HTTP requests/responses (with their contents).
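For instance, a quick way to dump plain-HTTP traffic to the console during development (the interface name and port here are assumptions to adapt to your environment):
# print reassembled HTTP streams on port 8080 to the console
tcpflow -c -i eth0 port 8080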