OCI (Oracle Cloud Infrastructure) logs SIEM integration

Has anyone integrated OCI (Oracle Cloud Infrastructure) audit logs and OCI service logs with a Security Information and Event Management (SIEM) tool such as ArcSight?
If yes, where are these logs stored, and from where can they be accessed immediately?

There are a couple of ways you can approach the problem.
Audit logs are available via the REST API and SDKs. You can call ListEvents, documented here, to retrieve the audit logs. The call returns AuditEvent objects in the body, which can then be parsed and ingested into the SIEM.
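For illustration, here is a rough sketch of calling ListEvents with the OCI Java SDK; the client and builder names are as I recall them from the SDK, so verify them against the SDK version you use, and the compartment OCID and time window are placeholders:

```java
import java.util.Date;

import com.oracle.bmc.ConfigFileReader;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.audit.AuditClient;
import com.oracle.bmc.audit.model.AuditEvent;
import com.oracle.bmc.audit.requests.ListEventsRequest;
import com.oracle.bmc.audit.responses.ListEventsResponse;

public class AuditLogPull {
    public static void main(String[] args) throws Exception {
        // Authenticate with the default ~/.oci/config profile.
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider(ConfigFileReader.parseDefault());
        AuditClient audit = AuditClient.builder().build(provider);

        // Pull the last hour of audit events for one compartment (placeholder OCID).
        Date end = new Date();
        Date start = new Date(end.getTime() - 60L * 60L * 1000L);
        ListEventsResponse response = audit.listEvents(
                ListEventsRequest.builder()
                        .compartmentId("ocid1.compartment.oc1..example")
                        .startTime(start)
                        .endTime(end)
                        .build());

        // Each AuditEvent can be serialized to JSON and forwarded to the SIEM.
        // Production code should also follow the pagination token on the response.
        for (AuditEvent event : response.getItems()) {
            System.out.println(event);
        }
        audit.close();
    }
}
```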
Alternatively, you can raise a bulk export request for audit log events and have them delivered to an Object Storage bucket, from which the raw files can be pulled and ingested into the SIEM.
Similarly, you can export the service logs of your choice to an Object Storage bucket, retrieve them from there, and ingest them into the SIEM.
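A similarly hedged sketch of the Object Storage path, listing the exported objects in a bucket and downloading each raw file so a SIEM collector can pick it up (bucket name and staging directory are illustrative):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import com.oracle.bmc.ConfigFileReader;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.objectstorage.ObjectStorageClient;
import com.oracle.bmc.objectstorage.model.ObjectSummary;
import com.oracle.bmc.objectstorage.requests.GetNamespaceRequest;
import com.oracle.bmc.objectstorage.requests.GetObjectRequest;
import com.oracle.bmc.objectstorage.requests.ListObjectsRequest;

public class ExportedLogPull {
    public static void main(String[] args) throws Exception {
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider(ConfigFileReader.parseDefault());
        ObjectStorageClient os = ObjectStorageClient.builder().build(provider);

        // Tenancy namespace plus the bucket holding the exported log files (bucket name is a placeholder).
        String namespace = os.getNamespace(GetNamespaceRequest.builder().build()).getValue();
        String bucket = "audit-log-export";

        for (ObjectSummary summary : os.listObjects(
                        ListObjectsRequest.builder().namespaceName(namespace).bucketName(bucket).build())
                .getListObjects().getObjects()) {
            // Download each raw file into a staging directory that the SIEM connector watches.
            // Assumes flat object names; prefixed names would need their directories created first.
            try (InputStream in = os.getObject(
                    GetObjectRequest.builder()
                            .namespaceName(namespace)
                            .bucketName(bucket)
                            .objectName(summary.getName())
                            .build()).getInputStream()) {
                Files.copy(in, Paths.get("/var/siem/staging", summary.getName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
        os.close();
    }
}
```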
Disclosure: I currently work for Oracle, though not directly on the Audit/Logging service. Thoughts are my own.

It's a bit of a late answer, but I will leave it here for everyone's benefit.
There is an architecture pattern that describes how to push different types of logs to external destinations, one of which is a SIEM.
In the Architecture Center you can find the details of the pattern, described for IBM QRadar but valid for any Kafka-compatible SIEM.
https://docs.oracle.com/en/learn/oci_ibm_qradar/index.html#configure-ibm-qradar

There are multiple ways you can access the logs.
Service Connector >> Object Storage >> OCI CLI to read the exported objects.
Service Connector >> Streaming (public endpoint) >> a Kafka consumer that writes to a file (see the sketch below).
Log Analytics, or listing audit events directly with OCI CLI commands.
Eventually the logs need to be parsed. ArcSight has a JSON folder parser (FlexConnector) that can be used to parse these logs.
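To illustrate the second option, below is a rough sketch of a plain Kafka consumer reading from an OCI stream over the Streaming service's Kafka-compatible endpoint and appending each record to a file for the SIEM to ingest. The endpoint, stream name, group id, and the SASL username/auth-token format are assumptions you would adapt to your tenancy:

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StreamToFileConsumer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Kafka-compatible endpoint of the OCI Streaming service (region-specific, placeholder below).
        props.put("bootstrap.servers", "cell-1.streaming.eu-frankfurt-1.oci.oraclecloud.com:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // Username is "<tenancy>/<user>/<stream pool OCID>", password is an auth token (assumed format).
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"mytenancy/my.user@example.com/ocid1.streampool.oc1..example\" "
                        + "password=\"<auth-token>\";");
        props.put("group.id", "siem-forwarder");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             PrintWriter out = new PrintWriter(new FileWriter("/var/siem/staging/oci-logs.json", true))) {
            // The stream the Service Connector writes log events to (placeholder name).
            consumer.subscribe(Collections.singletonList("siem_logs"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    out.println(record.value()); // one JSON log event per line
                }
                out.flush();
            }
        }
    }
}
```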

Related

Azure APIM - import a single controller from an API, or split controllers from a single API into different APIM APIs

I have backend APIs with multiple controllers that split up the operations: some are for third parties, others are for frontend proxies / other microservices, and then there is a support/admin controller. I don't want all of these in the same APIM API/Product.
Currently I either have to manually edit the OpenAPI definition of the API before importing it into APIM, or I have to manually create the API in APIM and then use the dev tools extractor to export the templates for other environments.
My stack is .NET 5.0/6.0 (ASP.NET) with NSwag to document the API, using the Azure APIM development toolkit to automate the bits we can.
I'd like an automated pipeline where the backend API is built, separate OpenAPI definitions are produced (or controllers can be filtered during import), and the output feeds an APIM pipeline that updates the dev environment APIM and can then be auto-deployed into other environments when needed.
Does anyone else do this type of thing, or do you put the whole API into a single APIM API/Product? Or do you have completely separate backend APIs that make up a microservice? Or something else?
I want to share what we do at work.
The key idea is to have a custom OpenAPI processor (ideally a command-line tool so that you can call it in a pipeline) combined with OpenAPI extension properties. The processor is basically a YAML or JSON parser, depending on your API spec format.
1. Create a master OpenAPI spec file that contains all the operations in your controllers.
2. Create an extension, say x-api-product: "Product A", and add it to the relevant API operations.
3. Have your custom processor take in the master spec file (e.g. YAML), group the operations by x-api-product, and output a set of new OpenAPI specs.
4. Import the output files into APIM.
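A minimal sketch of such a processor in Java, using Jackson's YAML module (the x-api-product extension comes from the steps above; file names, the output layout, and the handling of untagged operations are just illustrative choices):

```java
import java.io.File;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

public class OpenApiSplitter {
    public static void main(String[] args) throws Exception {
        ObjectMapper yaml = new ObjectMapper(new YAMLFactory());
        ObjectNode master = (ObjectNode) yaml.readTree(new File("master-openapi.yaml"));

        // One output spec per x-api-product value, each starting as a copy of the master with empty paths.
        Map<String, ObjectNode> specsByProduct = new HashMap<>();
        ObjectNode paths = (ObjectNode) master.get("paths");

        Iterator<Map.Entry<String, JsonNode>> pathItems = paths.fields();
        while (pathItems.hasNext()) {
            Map.Entry<String, JsonNode> pathItem = pathItems.next();
            Iterator<Map.Entry<String, JsonNode>> operations = pathItem.getValue().fields();
            while (operations.hasNext()) {
                Map.Entry<String, JsonNode> operation = operations.next();
                JsonNode productNode = operation.getValue().path("x-api-product");
                if (productNode.isMissingNode()) {
                    continue; // untagged operations (and non-operation fields) are skipped in this sketch
                }
                ObjectNode spec = specsByProduct.computeIfAbsent(productNode.asText(), p -> {
                    ObjectNode copy = master.deepCopy();
                    copy.set("paths", yaml.createObjectNode());
                    return copy;
                });
                // Add the operation under its path, creating the path item in the output if needed.
                ObjectNode outPaths = (ObjectNode) spec.get("paths");
                ObjectNode outPathItem = outPaths.has(pathItem.getKey())
                        ? (ObjectNode) outPaths.get(pathItem.getKey())
                        : outPaths.putObject(pathItem.getKey());
                outPathItem.set(operation.getKey(), operation.getValue().deepCopy());
            }
        }

        for (Map.Entry<String, ObjectNode> entry : specsByProduct.entrySet()) {
            String fileName = entry.getKey().toLowerCase().replaceAll("[^a-z0-9]+", "-") + "-openapi.yaml";
            yaml.writerWithDefaultPrettyPrinter().writeValue(new File(fileName), entry.getValue());
        }
    }
}
```

Each output file can then be imported into APIM as its own API, either manually or from the same pipeline.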
The thing to think about is how you manage the master API spec file. We follow the API Spec First approach, where we manually create and modify the YAML file, and use OpenAPI Generator to code gen the controllers.
Hope this gives you some ideas.

Configuring a linked service between Zoho (CRM) and Azure Data Factory

I am trying to configure a linked service in Azure Data Factory (ADF) in order to load Zoho data into my SQL database. I am using a REST API linked service for this, which successfully connects to Zoho. I am mainly struggling to select the proper relative URL. Currently I use the following settings:
Base URL: https://www.zohoapis.eu/crm/v2/
Relative URL: /orgxxxxxxxxxx
When I try to preview the data, this results in the following error:
Error occurred when deserializing source JSON file ''. Check if the data is in valid JSON object format.
Unexpected character encountered while parsing value: <. Path '', line 0, position 0.
Activity ID: 8d32386a-eee0-4d2a-920a-5c70dc15ef06
Does anyone know what I have to do to make this work such that I can load all required Zoho tables into ADF?
As per the official Microsoft documentation:
The Zoho connector is supported for the following activities:
1. Copy activity with supported source/sink matrix
2. Lookup activity
The REST API linked service doesn't support the Zoho connector.
You are expecting a JSON file as the source, but your sink is Azure SQL Database. You need to change your approach.
You can directly use the Zoho linked service in ADF to fetch the data from Zoho (CRM).
You can copy data from Zoho to any supported sink data store. This connector supports Xero access token authentication and OAuth 2.0 authentication.
You need to use a Copy activity to fetch the data from Zoho (CRM) and store it in an Azure Storage account. Once the data is received, you need to denormalize the JSON file in order to store it in Azure SQL Database. Use a Data Flow activity to denormalize the data using the flatten transformation.
Use the flatten transformation in mapping data flows to take array values inside hierarchical structures such as JSON and unroll them into individual rows. This process is known as denormalization.
Learn more here about the flatten transformation in mapping data flows.
Once the flatten is done, you can store the data in Azure SQL Database.

How to get Serilog JSON-formatted logs to appear correctly in Datadog

I have been asked to implement a centralized monitoring and logging system using DataDog that will receive information from various services and applications, some running as Windows Services on virtual machines and some running inside a Kubernetes cluster. In order to implement the logging aspect so that DataDog can correctly ingest the logs, I'm using Serilog to do the logging.
My plan is currently to write the logs to the console in json format and have the DataDog agent installed on each server or k8s node capture and ship them to DataDog. This works, at least for the k8s node where I've implemented it so far. (I'm trying to avoid using the custom Serilog sink for DataDog as that's discouraged in the DataDog documentation).
My problem is that I cannot get logs ingested correctly on the DataDog side. DataDog expects the JSON to contain a property called Message, but Serilog names this property RenderedMessage (if I use JsonFormatter(renderMessage: true)) or #m (if I use RenderedCompactJsonFormatter()).
How can I get my logs shipped to DataDog and ingested correctly on the DataDog end?
Answering my own question.
The DataDog logging page has a Configuration section. On that page, the "Pre processing for JSON logs" section allows you to specify alternate property names for a few of the major log message properties. If you add #m to the Message attributes section and #l to the Status attributes section, you will correctly ingest JSON messages from the RenderedCompactJsonFormatter formatter. If you add RenderedMessage and Level respectively, you will correctly ingest output from JsonFormatter(renderMessage: true). You can specify multiple attributes in each section, so you can support both formats simultaneously.
If you use the Console sink, just apply a corresponding colored theme to it.

Integration Server - save pipeline to database

I'm trying to create a service that writes the input and output pipeline (XML data) every time one of the installed services is invoked. How should my service know when another service is invoked? Is there any built-in tool to catch the input and output pipeline and save it to a file or any other destination?
You can use one of webMethods' built-in services in the WmPublic package:
pub.flow:savePipelineToFile
Note that it's not recommended to use "savePipelineToFile" in production, for obvious performance/resource reasons. And of course, to load the pipeline from the file, use:
pub.flow:restorePipelineFromFile
The usual workflow for debugging in webMethods is:
1. Add the "savePipelineToFile" step as the first instruction in your flow service.
2. Execute the flow service, or execute it from some client that will trigger the flow service.
3. Disable the "savePipelineToFile" step in the flow service.
4. Add the "restorePipelineFromFile" step at the very top of your flow service.
5. Run webMethods Designer in Debug mode and step through. Once it goes over the "restorePipelineFromFile" step, you should see the entire content of the pipeline, including the inputs as passed by the client. If you continue stepping through your flow service step by step, you should see what the final output is.
The services "savePipelineToFile" and "restorePipelineFromFile" are very useful to debug your services. The "pipeline" file will be located at:
IntegrationServer\pipeline
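If you prefer to trigger the capture from a Java service rather than a flow step, here is a rough sketch of invoking pub.flow:savePipelineToFile through the IS Java API. The fileName input name is what I recall for that built-in service, and the class and file names are illustrative, so double-check against the WmPublic reference for your IS version:

```java
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import com.wm.lang.ns.NSName;

public final class PipelineCapture {
    // Java service body: Integration Server passes the current pipeline in as "pipeline".
    public static void capturePipeline(IData pipeline) throws ServiceException {
        IDataCursor cursor = pipeline.getCursor();
        // Tell savePipelineToFile where to write; the file ends up under IntegrationServer\pipeline.
        IDataUtil.put(cursor, "fileName", "myService-capture.xml");
        cursor.destroy();
        try {
            // Dump the current pipeline (including the fileName parameter we just added).
            Service.doInvoke(NSName.create("pub.flow:savePipelineToFile"), pipeline);
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }
}
```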
If your requirements dictate that you need to produce XML flat files, then use the following:
To serialize documents to XML (WmPublic package):
pub.xml:documentToXMLString
To write an XML string to a file (WmPublic package):
pub.file:stringToFile
Hope it helps!
EDIT:
By default the SOAP headers are hidden. Most of the time you don't need the SOAP headers, but you can force webMethods to add them to the pipeline by doing the following.
1) Click on the web service descriptor.
2) In the properties panel, set "Pipeline Headers Enabled" to true.
The SOAP headers won't appear in your pipeline at design time; they will only exist at runtime. If you want to see the content of the SOAP header, then you will need to use the "savePipelineToFile" / "restorePipelineFromFile" technique described previously.
To make the SOAP header appear at design time, you will have to do implicit mapping to the SOAP header fields as if they were there.

WSO2 API Manager and BAM - How to control API invocation?

How can I retrieve the number of API invocations? I know the data has to be somewhere, because WSO2 BAM shows pie charts with similar data...
I would like to get that number in a mediation sequence; is that possible? Might this be achieved via a DB lookup?
The way API usage monitoring in WSO2 API Manager works is that there is an API handler (org.wso2.carbon.apimgt.usage.publisher.APIUsageHandler) that gets invoked for each request and response passing through the API Gateway. In this handler, all pertinent information with regard to API usage is published to the WSO2 BAM server. The WSO2 BAM server persists this data in the Cassandra database that is shipped with it. Then there is a BAM Toolbox, packaged with the required analytic scripts written using Apache Hive, that can be installed on the BAM server. These scripts summarize the data periodically and persist the summarized data to an SQL database. The graphs and charts shown in the API Publisher web application are created using the summarized data from the SQL database.
Now, if what you require is extractable from these summarized SQL tables, then I suppose the process is very straightforward. You could use the DBLookup mediator for this. But if some dimension of the data which you need has been lost due to the summarizing, then you will have a little more work to do.
You have two options.
The easiest approach, which involves no coding at all, would be to write a custom Hive script that suits your requirement and summarizes the data to an SQL table. Then, like before, use a DBLookup mediator to read the data. You can look at the existing Hive scripts that are shipped with the product to get an idea of how they are written.
If you don't want BAM in the picture, you can still do it with minimal coding, as follows. The implementation class which performs the publishing is org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher. This class implements the interface org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataPublisher. The interface has three instance methods, as follows.
public void init()
public void publishEvent(RequestPublisherDTO requestPublisherDTO)
public void publishEvent(ResponsePublisherDTO responsePublisherDTO)
The init() method runs just once during server startup. Here is where you can add all your logic which is needed to bootstrap the class.
The publishEvent(RequestPublisherDTO) is where you publish request events and publishEvent(ResponsePublisherDTO) is where you publish response events. The DTO objects are encapsulated representations of the request and response data respectively.
What you will have to do is write a new implementation of this interface and configure it as the value of the DataPublisherImpl property in api-manager.xml. To make things easier, you can simply extend the existing org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher, write your necessary logic to persist usage data to an SQL database within init(), publishEvent(RequestPublisherDTO) and publishEvent(ResponsePublisherDTO), and at the end of each method just call the respective superclass method.
E.g. the overriding init() will call super.init(). This way you are only adding the necessary code for your requirement, and leaving the BAM stat collection to the superclass.
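A skeleton of such an extension could look like the following; the publisher and DTO class names are the ones mentioned above, while the DTO import package and the comments inside the method bodies are assumptions you would replace with your own persistence logic:

```java
import org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher;
import org.wso2.carbon.apimgt.usage.publisher.dto.RequestPublisherDTO;
import org.wso2.carbon.apimgt.usage.publisher.dto.ResponsePublisherDTO;

public class SqlBackedUsageDataPublisher extends APIMgtUsageDataBridgeDataPublisher {

    @Override
    public void init() {
        // Bootstrap whatever your own persistence needs, e.g. a JDBC connection pool.
        super.init(); // keep the default BAM publishing behaviour
    }

    @Override
    public void publishEvent(RequestPublisherDTO requestPublisherDTO) {
        // e.g. INSERT one row per request into your own usage table here.
        super.publishEvent(requestPublisherDTO);
    }

    @Override
    public void publishEvent(ResponsePublisherDTO responsePublisherDTO) {
        // e.g. record response time / status for the matching request here.
        super.publishEvent(responsePublisherDTO);
    }
}
```

You would then point the DataPublisherImpl property in api-manager.xml at this class, as described above.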