At what scope are Azure event log IDs unique?

I have an application that monitors activity logs from multiple different and unrelated Azure platforms using Microsoft's Management Activity API. According to the Common Schema documentation, event IDs are the "Unique identifier[s] of an audit record", but the documentation does not specify a scope. Are they globally unique across all Azure instances, or is it possible I will see an ID collision between two unrelated instances?
Thanks!

GUIDs can be assumed to be unique in practice; the chance of a collision is vanishingly small. Refer to this SO answer, which covers this eloquently.
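For intuition, and assuming the IDs are random version 4 GUIDs (which carry 122 random bits), the birthday bound gives a rough collision estimate. A minimal Python sketch:

    # Birthday-bound estimate for random version 4 GUIDs (122 random bits).
    # The approximation p ~ n*(n-1) / (2 * 2**122) holds while p is small.
    SPACE = 2 ** 122  # number of distinct random v4 GUID values

    for n in (10**9, 10**12, 10**15):
        p = n * (n - 1) / (2 * SPACE)
        print(f"{n:.0e} IDs -> collision probability ~ {p:.1e}")

Even at 10^15 generated IDs the estimate stays around 10^-7, which is why collisions between unrelated instances are generally treated as a non-issue.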

Related

How to deal with NGSI-LD tenants? Create? List? Delete?

I have a hard time trying to find concrete information about how to deal with tenants in an NGSI-LD Context Broker.
The ETSI specification defines multi-tenancy, but it seems it doesn't specify any operations to create a tenant, list available tenants or remove a tenant.
I presume each broker is free to implement multi-tenancy in its own way, but I've searched for the tenant keyword in the broker documentations (at least for Orion-LD, Stellio and Scorpio) with no success.
Thanks to this Stack Overflow post I've successfully created a tenant in Orion-LD.
I'd like to know if there are any tenant operations (documented or undocumented) exposed by brokers.
Especially any facilities to remove a tenant, along with all resources that have been created "inside" it.
Thanks.
So, first of all, tenants are created "on the fly". If a request comes in to create an entity, subscription, or registration, and the HTTP header "NGSILD-Tenant" specifies a tenant that does not already exist, then the tenant is created and the entity/sub/reg is created "under it". The HTTP header is used for ALL operations. jsonldContexts are different; those are "omnipresent" and exist regardless of the tenant used.
GET operations (like all other operations) can use the NGSILD-Tenant header to indicate which tenant the GET is to be performed on, but if that tenant does not exist, it will naturally not be created; an error is returned instead.
There are no official endpoints to list tenants (nor to delete them; that would be a bit dangerous!), but in the case of Orion-LD, I implemented an endpoint for debugging purposes: GET /ngsi-ld/ex/v1/tenants. That one you can use if you please. Just remember, no other NGSI-LD broker supports that endpoint.
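A minimal sketch of both points, assuming a local Orion-LD at port 1026 and a hypothetical tenant name (the tenant-listing endpoint is Orion-LD only):

    import requests

    BROKER = "http://localhost:1026"  # assumed local Orion-LD instance
    TENANT = "acme"                   # hypothetical tenant name

    # Creating an entity with the NGSILD-Tenant header creates the
    # tenant on the fly if it does not already exist.
    entity = {
        "id": "urn:ngsi-ld:Building:001",
        "type": "Building",
        "name": {"type": "Property", "value": "Main office"},
    }
    r = requests.post(f"{BROKER}/ngsi-ld/v1/entities",
                      json=entity,
                      headers={"NGSILD-Tenant": TENANT})
    print(r.status_code)  # 201 on success

    # Orion-LD's debug-only endpoint for listing tenants (non-standard):
    print(requests.get(f"{BROKER}/ngsi-ld/ex/v1/tenants").json())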

I want to know what Orion's Federation feature is

I want to know what Orion's Federation feature is.
I have read the Orion documentation and tried the federation function, and the data was registered in all three Orions.
I thought that the second Orion acts as a Proxy and does not store data, but is that not the case?
If all three Orions store data, is it correct to say that federation is not a function in its own right, but rather a concatenation of subscriptions?
The federation mechanism you refer to is the one in the documentation. In this case, it is based on subscriptions that copy data between the brokers whenever entities change.
Orion also has the registration and request-forwarding mechanism. In that case, once the registration is done, the Context Broker forwards requests to the registered broker. This approach sounds closer to the one you are describing, but I encourage you to use the first method (based on subscriptions), since all the advanced operations like filtering work without issues.
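As a sketch of the subscription-based method, assuming two NGSIv2 Orion brokers at hypothetical addresses, broker A can push every change of a given entity type to broker B by notifying B's /v2/op/notify endpoint:

    import requests

    ORION_A = "http://orion-a:1026"  # hypothetical broker addresses
    ORION_B = "http://orion-b:1026"

    # Subscription on broker A: any change to a Room entity is notified
    # to broker B's /v2/op/notify endpoint, which ingests the notification
    # as a regular update (federation by copying data).
    subscription = {
        "description": "Federate Room entities from A to B",
        "subject": {"entities": [{"idPattern": ".*", "type": "Room"}]},
        "notification": {"http": {"url": f"{ORION_B}/v2/op/notify"}},
    }
    r = requests.post(f"{ORION_A}/v2/subscriptions", json=subscription)
    print(r.status_code)  # 201 on success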
A well-designed federation should not allow values to be shadowed by a local database. Unfortunately, that is not a feature the Orion Broker offers by design. It is up to your design, and nothing prevents shadowing, as far as I understand: even if you have registered a provider for an entity, someone can create such an entity locally, which may be a mess and a source of conflicts.
An alternative that may work is to perform the federation through a custom-built REST service of your own that simply acts as a proxy in front of the different Orion Brokers. If this custom service is the single entry point, you can guarantee a purely federated system with no shadowing of values.

Migrating previously collected datasets to FIWARE backend

I have at hand the task of migrating previously collected environmental datasets (weather, air quality, noise, etc.) from sensors deployed in different locations, and stored in several tables of a MySQL database, to my instance of the FIWARE Orion CB, to then be persisted to the FIWARE backend.
The challenges are many:
the data isn't stored following FIWARE standards, so it must be transformed according to the FIWARE data models.
not all tables are good candidates for being transformed into an entity.
some entities need to have field values from several tables as attributes. For instance, an AirQualityObserved entity type would have attributes from these tables: airquality, co, co2, no2 and deployment. Mapping these attributes to a particular entity type is a challenge.
As this is a one-time upload (not live data), I am thinking of two possibilities to go about it.
Add an LwM2M client that keeps sending data to an IoT Agent, to be eventually passed on to the Orion CB, until the last record.
Create a Python script that "pretends" to be a contextProvider to the Orion instance, sending data (say, every 5 seconds) until the last record.
I have not come across a case in my literature search that addresses such a situation. Are there any recommendations from the FIWARE Foundation for situations similar to this?
What would you suggest for the data fields --> entity attributes mapping, where attribute values actually need to be combined from several tables?
IOTA usage makes sense when you have live data (I mean, a real device sending information to the FIWARE platform). However, you say this is a one-time upload, so the Python script option seems better in this case.
(A little terminological comment here: your script will take the role of a context producer. A context provider is a different actor, related to registrations and query/update forwarding. See this piece of documentation for additional detail.)
With regard to the mapping of data fields to entity attributes, I don't have any particular suggestion. It is just a matter of analyzing the data model (i.e. the entity attributes) and finding out how to fill that information from the data in your tables.
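As a rough sketch of the one-time migration script (all table names, column names and credentials below are hypothetical), rows joined from several MySQL tables can each become one AirQualityObserved entity and be pushed to Orion in batches via /v2/op/update:

    import mysql.connector
    import requests

    ORION = "http://localhost:1026"  # assumed Orion instance

    conn = mysql.connector.connect(user="user", password="pw",
                                   database="sensors")
    cur = conn.cursor(dictionary=True)

    # Hypothetical join combining readings and deployment metadata
    # into one row per observation.
    cur.execute("""
        SELECT a.observed_at, a.station_id, a.no2, a.co2,
               d.latitude, d.longitude
        FROM airquality a
        JOIN deployment d ON d.station_id = a.station_id
    """)

    entities = []
    for n, row in enumerate(cur):
        entities.append({
            "id": f"urn:ngsi:AirQualityObserved:{row['station_id']}:{n}",
            "type": "AirQualityObserved",
            "dateObserved": {"type": "DateTime",
                             "value": row["observed_at"].isoformat()},
            "NO2": {"type": "Number", "value": row["no2"]},
            "CO2": {"type": "Number", "value": row["co2"]},
            "location": {"type": "geo:json",
                         "value": {"type": "Point",
                                   "coordinates": [row["longitude"],
                                                   row["latitude"]]}},
        })

    # Batch-append in chunks rather than one huge request.
    for i in range(0, len(entities), 100):
        r = requests.post(f"{ORION}/v2/op/update",
                          json={"actionType": "append",
                                "entities": entities[i:i + 100]})
        r.raise_for_status()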

Microservice Architecture design

I have a few questions about microservice architecture.
Let's say there are microservices A, B and C.
A maintains the context of a job, apart from the other things it does, and B and C work to fulfill that job by doing their respective tasks for it.
Here are my questions.
1. DB design
I am talking about SQL here.
Using foreign keys simplifies a lot of things.
But as I understand microservice architecture, every microservice maintains its own data, and that data has to be queried from the owning service if required.
Does it mean no foreign keys referring to tables in other microservices?
2. Data Flow
As I see it, there are two ways. In both, all queries are done using a jobId that is maintained uniquely for a job across all microservices.
Option 1: client requests go directly to the individual service for each task. To get a summary of the job, the client queries the individual microservices, collects the data and presents it to the user.
Option 2: do everything through a coordinating microservice. Client requests go to service A, and in turn service A gathers info from all the other microservices for that jobId and passes it to the user.
Which of the above two should be followed, and why?
You're correct in thinking that microservices should ideally have their own data structures so they can be deployed independently. However, there are several design patterns that help you, and that doesn't necessarily translate into "no FKs". Please refer to:
Database per service
Sagas
API Composition
CQRS
The patterns listed above answer both your questions.
Does it mean no foreign keys referring to tables in another microservices?
Not in the database sense. One microservice may hold the IDs of remote entities, but it should not assume anything about the remote microservice's persistence (i.e. the database type; it could be anything from SQL to NoSQL).
Which of the above two has to be followed and why?
This really depends. There are two types of architectures: choreography and orchestration. Both of them are good. Which one to use? Only you can decide. Here are a few blog posts about them:
Microservices — When to React Vs. Orchestrate
Benefits of Microservices - Choreography over Orchestration, Low Coupling and High Cohesion
Also, the solution to this SO question might be useful.
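To make the two options concrete, here is a minimal sketch of the orchestration variant (service names, ports and routes are made up): service A stores only the shared jobId and composes the summary by calling B's and C's APIs, rather than joining across their databases:

    import requests

    SERVICE_B = "http://service-b:8080"  # hypothetical endpoints
    SERVICE_C = "http://service-c:8080"

    def job_summary(job_id: str) -> dict:
        """Orchestration / API composition: A queries B and C by the
        shared jobId and merges the results, instead of relying on
        cross-service foreign keys."""
        tasks_b = requests.get(f"{SERVICE_B}/jobs/{job_id}/tasks").json()
        tasks_c = requests.get(f"{SERVICE_C}/jobs/{job_id}/tasks").json()
        tasks = tasks_b + tasks_c
        return {"jobId": job_id,
                "tasks": tasks,
                "done": all(t["status"] == "done" for t in tasks)}

The choreography variant would instead have the client (or an event stream) talk to B and C directly, with no single coordinator.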

Can MS Enterprise Library Logging be used for multiple applications?

I'm wondering if it's (a) possible and (b) good practice to log multiple applications to a single log instance.
I have several ASP.NET apps and I would like to aggregate all exceptions in a centralized location that can be queried as part of an Enterprise Dashboard app. I'm using both the EL Logging block and the EL Exception Handling block, along with the Database Trace Listener. I would like to see exceptions across all apps logged to a single DB.
Any comments, best practice guidelines or answers would be extremely welcome.
Yes, it is definitely possible to store multiple application logs in a central location using EL.
An Enterprise Dashboard application that lets you view exceptions across applications and tiers, and provides reporting is a great reason to centralize your logging. So I'll say yes to question b as well.
Possible Issues/Negatives
I'm assuming that you are using the Database Trace Listener since you mention it in your question. If a large number of applications log a large number of entries, combined with users querying the (potentially large) log database, performance can degrade (since the logging is done synchronously), which could impact your applications.
Another Approach
To mitigate against this possibility, I would investigate using the Distributor Service to log asynchronously. In that model, all of the applications would log to a message queue (using the MSMQ Trace Listener). A separate service then polls the queue and would forward the log entries to a trace listener (in your case a Database Trace Listener) which would persist the messages in your dashboard database. This setup is more complicated. But it does seem to align with what you are trying to achieve and has some other benefits such as asynchronous processing and the ability to log even if the dashboard database is down (e.g. for maintenance).
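For illustration only, here is the shape of that pattern sketched in Python rather than Enterprise Library (EL itself would use the MSMQ Trace Listener plus the Distributor Service, and a real multi-application deployment needs an out-of-process queue): applications enqueue entries and return immediately, while a separate worker drains the queue into the central database:

    import queue
    import sqlite3
    import threading

    log_queue: "queue.Queue[tuple]" = queue.Queue()

    def log(app: str, message: str) -> None:
        # Producer side: enqueue and return immediately (asynchronous).
        log_queue.put((app, message))

    def distributor() -> None:
        # Consumer side: drain the queue and persist entries centrally,
        # analogous to the Distributor Service polling MSMQ.
        db = sqlite3.connect("dashboard.db")
        db.execute("CREATE TABLE IF NOT EXISTS log (app TEXT, message TEXT)")
        while True:
            db.execute("INSERT INTO log VALUES (?, ?)", log_queue.get())
            db.commit()

    threading.Thread(target=distributor, daemon=True).start()
    log("AppA", "unhandled exception in checkout")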
Other Considerations
You may also want to think about standardizing some LogEntry properties across applications. For example, LogEntry doesn't really have an "application" property, so you could add an ExtendedProperty to represent the application name. Or you might standardize on a specific format for the Message property, so that various pieces of information can be pulled out of the message and stored in separate database columns for easier searching and categorization.