Difference between edge policy and WAF policy - oracle-cloud-infrastructure

I am a new user of Oracle Cloud Infrastructure. I am trying to apply a WAF policy to my web application. There are edge policies and WAF policies, which seem to be the same.
So what is the difference between the two policies? Do I need to add both (edge and WAF) policies, or just one?

LB-WAF is OCI's newer version of the WAF. It works as a policy attached to a load balancer, and the main difference is that the WAF service and the Load Balancer service work side by side instead of separately, as in the previous version of the WAF.
Here are some of the main differences.
Web Application Firewall is OCI's standard suite of features and countermeasures designed to help protect your web application against Layer 7 threats.
Web Application Firewall has two versions, Edge WAF (WAF v1) and Regional WAF (WAF v2 / LB WAF), each serving a different purpose.
The main difference is that Regional WAF is attached to an OCI Load Balancer and acts as an extension of the LB, providing additional security functionality; it is recommended for customers with a strong regional presence. Edge WAF (the version you are currently using) is aimed at customers who want to add security features to their already established infrastructure.
WAF rules also work differently in Regional WAF compared to Edge WAF. Regional WAF allows the use of optional condition functions, which, as the name implies, let you set particular conditions for a WAF rule to match, for example "Country/Region in list". When the condition matches, a pre-set action is taken: "allow", "allow but log", or "return a specified HTTP response".
Other key differences:
Both services have logging; for Regional WAF it has to be enabled manually.
Changes in Regional WAF propagate significantly faster because they take place in a load balancer rather than on multiple servers across the world.
Regional WAF allows rule filtering; similar functionality can be achieved with Edge WAF via the OCI CLI.
Regional WAF does not currently offer the "Bot Management" suite of countermeasures; however, that feature set is on the roadmap.
Regional WAF has "Advanced IP Rate Limiting", an improved version of the "IP Rate Limiting" available in Edge WAF, configurable via the CLI.
Caching rules are not available in Regional WAF.
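To illustrate the condition-plus-action model, a Regional (LB) WAF access-control rule might look roughly like the JSON sketch below. This is an unverified illustration: the field names follow the general shape of OCI's WebAppFirewallPolicy model, and the condition expression is an assumption about the policy condition language, so check both against the current OCI documentation before use.

```json
{
  "displayName": "block-non-allowed-countries",
  "actions": [
    { "name": "returnForbidden", "type": "RETURN_HTTP_RESPONSE", "code": 403 },
    { "name": "allowTraffic", "type": "ALLOW" }
  ],
  "requestAccessControl": {
    "defaultActionName": "allowTraffic",
    "rules": [
      {
        "type": "ACCESS_CONTROL",
        "name": "country-not-in-list",
        "conditionLanguage": "JMESPATH",
        "condition": "!i_contains(['US', 'CA'], connection.source.geo.countryCode)",
        "actionName": "returnForbidden"
      }
    ]
  }
}
```

The key idea is that actions are declared once and referenced by name from each rule, which is what the "Country/Region in list" style conditions plug into.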

Related

ECS Integration with AppDynamics Issue

Currently, I have a task to integrate ECS OpenShift with AppDynamics.
Here is my situation: I have integrated my project with AppDynamics, but I can't see my project on the AppDynamics dashboard, although I can see it under Tiers & Nodes. I have checked the router for OpenShift and it's not available, so I want to ask whether that is the reason I cannot see my project on the AppDynamics dashboard.
If your Nodes are showing under "Tiers & Nodes" this means that the Agents are reporting to the AppDynamics Controller.
If however there is nothing shown in the Application (or Tier, or Node) Dashboards this means that there are no registered Business Transactions relating to that Application (or Tier, or Node).
Dashboards (or flow maps, more accurately) generally show a view of registered Business Transactions (not simply of entities which are known to the Controller).
Have a look at the docs for an explanation of what a Business Transaction is and how these can be configured should none be detected OOTB:
https://docs.appdynamics.com/21.2/en/application-monitoring/configure-instrumentation/transaction-detection-rules
https://docs.appdynamics.com/21.2/en/application-monitoring/business-transactions

Azure API Management and Microservice

Can Azure API Management fulfill the requirements below, or do I need to use Application Gateway alongside it?
Route traffic to various microservices
Cope with traffic demands and scaling
Support API versioning
The microservices are hosted on Azure App services.
Thank you
Whether API Management can do this depends on what exactly you mean by these requirements.
Route traffic to various microservices
As you mention the microservices are Azure Web Apps, I assume you mean routing to different microservices based on different endpoints.
You can route a request to a particular backend based on its contents.
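For example, an APIM inbound policy can switch the backend based on the request path. This is a minimal sketch: the backend URLs and the `/orders` path prefix are placeholders, not part of the question.

```xml
<inbound>
    <base />
    <choose>
        <!-- Route order-related calls to the orders microservice (placeholder URL) -->
        <when condition="@(context.Request.Url.Path.StartsWith(&quot;/orders&quot;))">
            <set-backend-service base-url="https://orders-app.azurewebsites.net" />
        </when>
        <!-- Everything else goes to a default backend (placeholder URL) -->
        <otherwise>
            <set-backend-service base-url="https://catalog-app.azurewebsites.net" />
        </otherwise>
    </choose>
</inbound>
```

The `set-backend-service` policy is the standard way to redirect a request to a different backend inside APIM, so one published API can front several App Services.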
Cope with traffic demands and scaling
Azure Web Apps are scalable by default, and the traffic manager takes care of it. APIM can only handle traffic demands and scaling for the platform itself. You can scale up or out, even automatically based on rules. However, as scaling may take some time, it's recommended to monitor the capacity metric to accommodate increasing load.
Support API versioning
APIM is 'just' a virtualization layer between the consumer and the API, so API versioning in APIM only makes sense when you actually version the API itself. In APIM you can create version sets, which specify the versioning strategy for the API based on a header, path, or query string. An API can then be deployed in APIM under that version set, which makes it a versioned API.
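With the Azure CLI, creating a version set could look roughly like the sketch below. The resource group, service, and version set names are placeholders; verify the exact flags with `az apim api versionset create --help`.

```shell
# Create a version set that versions the API by URL path segment
# (e.g. /v1/orders, /v2/orders). All names below are placeholders.
az apim api versionset create \
  --resource-group my-rg \
  --service-name my-apim \
  --version-set-id orders-versions \
  --display-name "Orders API" \
  --versioning-scheme Segment
```

APIs added to this version set then share one logical identity but are addressed by version, which is what "Support API versioning" means in APIM terms.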

VPN connection between 2 google instances across google project

We have a technical requirement whereby two instances in different Google projects (but under the same Google account) need to communicate with each other.
To illustrate: we have two Google projects, X and Y, under the same Google account. We need every VM instance in project X to have reliable communication (perhaps over HTTP) with a known VM instance in project Y.
Because we programmatically scale our VM instances in project X up and down, we cannot take the approach of whitelisting the IPs of X's VM instances in project Y's firewall (under Google networking rules).
We have been reading about VPNs in Google. (A surer and easier solution could be a proxy, but because of business constraints we cannot explore that option.)
Google's documentation describes two kinds of VPN setup: static and dynamic routing.
However, because of our limited experience (we have never set up a VPN before), we are not sure what the ideal VPN setup is for us, or whether there is another solution we haven't yet encountered.
Can someone please provide some pointers on the correct setup for the above problem?
You can use a VPN to connect your two projects. Both static and dynamic routing will work. Dynamic routing requires you to set up Cloud Routers in both projects plus some additional configuration, and it allows your network to react automatically if the VPN is down. However, I don't think that helps you, since you don't have a fallback, so a static VPN is probably the better choice.
You may also want to look at VPC Network Peering and Shared VPC. These allow instances in different projects to communicate without needing a VPN at all.
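For instance, VPC Network Peering is set up with one gcloud command per side, run against each project. This is a sketch: the project and network names are placeholders for your own.

```shell
# In project X: peer X's network with Y's network (names are placeholders)
gcloud compute networks peerings create x-to-y \
  --project=project-x --network=net-x \
  --peer-project=project-y --peer-network=net-y

# In project Y: create the reciprocal peering so the link becomes active
gcloud compute networks peerings create y-to-x \
  --project=project-y --network=net-y \
  --peer-project=project-x --peer-network=net-x
```

Once both sides are created, instances can reach each other over internal IPs, so autoscaled VMs in project X need no per-IP whitelisting in project Y.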

Vulnerability Scan Authorization for Google Compute

What is the official and required process to perform our own independent vulnerability scans against virtual machines in the Google Compute Engine? These will be Penetration tests (our own) that will scan the public IP for open ports and report results back to us.
Microsoft Azure requires authorization and so does Amazon. Does Google?
No, Google does not need to be notified before you run a security scan on your Google Compute Engine projects. You will have to abide by the Google Cloud Platform Acceptable Use Policy and the Terms of Service.
Please also be aware of Google's Vulnerability Rewards Program Rules.
By default, all incoming traffic from outside networks is blocked. Each customer is responsible for creating the appropriate rules to allow access to their GCE instances as they consider appropriate:
https://cloud.google.com/compute/docs/networking#firewalls
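As an illustration, a firewall rule opening the ports you intend to scan could look like the following sketch; the rule name, ports, and source range are placeholders for your own values.

```shell
# Allow inbound TCP 80/443 only from the scanner's IP range (placeholder range)
gcloud compute firewall-rules create allow-scanner \
  --allow=tcp:80,tcp:443 \
  --source-ranges=203.0.113.0/24 \
  --direction=INGRESS
```

Scoping `--source-ranges` to the scanner keeps the instance closed to everyone else while the test runs.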
If you sign up for a trial you can perform that test on your own project. Overall, security configuration is up to the owner of the project and does not rest with Google.
As regards internal infrastructure, Google has its own security teams working 24x7 to ensure it stays at the vanguard of security best practices. http://googleonlinesecurity.blogspot.ca/

Cloudflare NS outage and having non Cloudflare NS3?

Due to rising server load from DoS attacks I've decided to use Cloudflare, but I am aware their users suffered an hour's outage in March because all Cloudflare name servers were "down".
I have my own NS; can I retain it as NS3 for the domain (as a fallback) alongside Cloudflare's NS1 and NS2?
What would the impact be?
I am aware that nameservers aren't selected in numerical order, but I believe Cloudflare's commonly used NS in the client's locale is likely to be selected first, so only a small portion of traffic would use my NS3 (without the benefits of Cloudflare's services). Is this correct, or just wishful thinking on my part?
Please consider opening a support ticket for these kinds of questions at CloudFlare directly.
You won't be able to leave additional name servers in place, such as ns3 and ns4, in addition to the 2 provided CloudFlare name servers. To use our service you'd need to have only our 2 name servers in place.
Your assumption is correct: roughly 33% of traffic will be served by your NS.
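The even-split figure can be sanity-checked with a tiny simulation. It uses the naive model in which each resolver picks uniformly at random among the three advertised nameservers (real resolvers often prefer the lowest-latency server, so this is only a rough approximation), and the hostnames are placeholders.

```python
import random
from collections import Counter

# Three advertised nameservers; under the naive model a resolver
# picks one uniformly at random for each lookup. Hostnames are placeholders.
nameservers = ["ns1.cloudflare.example", "ns2.cloudflare.example", "ns3.own.example"]

def simulate(queries: int, seed: int = 42) -> Counter:
    """Count how many of `queries` lookups land on each nameserver."""
    rng = random.Random(seed)
    return Counter(rng.choice(nameservers) for _ in range(queries))

counts = simulate(100_000)
share = counts["ns3.own.example"] / 100_000
print(f"share of lookups hitting our own NS3: {share:.1%}")  # roughly 33%
```

Under this model the third nameserver gets about a third of all lookups, not a small residual fraction, which is why those queries bypass Cloudflare's protections.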
You can run a TCP/UDP stream proxy using several nginx/haproxy/dnsmasq instances in front of several NS backends to load-balance incoming requests.
I tested several dnsmasq scenarios: you can serve 15k requests/second with a simple Celeron, up to 150k requests/second with redundant, cheap, geo-distributed VPSes. You can also use AWS Route 53 for solid performance.
In addition, you can spin up several nginx proxy manager instances with WAF enabled and wildcard SSL certificates to match most of the "Cloudflare edge logic" and protect/mask your origin IP addresses, even if a third of clients resolve origins from your own NS.
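A stream-proxy front end of the kind described above might be sketched in nginx like this (it requires nginx built with the stream module; the backend addresses are placeholders):

```nginx
# nginx.conf: load-balance DNS queries across two backend nameservers
stream {
    upstream dns_backends {
        server 10.0.0.10:53;
        server 10.0.0.11:53;
    }
    server {
        listen 53 udp;
        proxy_pass dns_backends;
        proxy_responses 1;   # expect one response per UDP query
    }
    server {
        listen 53;           # TCP listener for large responses and zone transfers
        proxy_pass dns_backends;
    }
}
```

Listening on both UDP and TCP matters for DNS, since responses that exceed the UDP size limit fall back to TCP.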