We have a continuous deployment pipeline that is set up to deploy to a chain of environments.
In the portal, all I can see is the following:
How do I configure the portal such that it also shows:
How long each release took, split by environment
Historic graph which show release time taken over time, split by environment
Failures split by environment over time
You can't configure VSTS to show these; no such feature is available in VSTS, and there is no official extension that can show them.
Some widgets can add release information to a dashboard, but they don't meet your requirements (see Add release information to the dashboard).
You can get such information through the Release REST API: Get a release (the timeToDeploy value of each environment), so you can build an application (e.g. a VSTS extension) to aggregate the data and display it.
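For illustration, here is a minimal Java sketch of the kind of application that approach implies; the account, project, release id, and API version below are placeholders/assumptions to adjust for your own VSTS instance:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Minimal sketch: fetch a single release via the Release REST API and print
// the raw JSON. The response contains an "environments" array whose entries
// carry the timeToDeploy value mentioned above. Account, project, release id,
// and the api-version are placeholders; adjust them for your instance.
public class ReleaseDeployTimes {
    public static void main(String[] args) throws Exception {
        String account = "myaccount";            // assumed VSTS account name
        String project = "MyProject";            // assumed team project
        int releaseId = 42;                      // the release to inspect
        String pat = System.getenv("VSTS_PAT");  // personal access token

        // PATs are sent as HTTP Basic auth with an empty user name.
        String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(String.format(
                        "https://%s.vsrm.visualstudio.com/%s/_apis/release/releases/%d?api-version=4.1-preview.6",
                        account, project, releaseId)))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Parse response.body() with a JSON library of your choice and read
        // environments[*].name and environments[*].timeToDeploy to build
        // per-environment duration and failure charts.
        System.out.println(response.body());
    }
}
```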
I currently have a task to integrate ECS OpenShift with AppDynamics.
Here is my situation: I have integrated my project with AppDynamics. I can't see my project on the AppDynamics dashboard, but I can see it under Tiers & Nodes. I have checked the router for OpenShift and it's not available, so I want to ask whether that is the reason I cannot see my project on the AppDynamics dashboard.
If your Nodes are showing under "Tiers & Nodes" this means that the Agents are reporting to the AppDynamics Controller.
If however there is nothing shown in the Application (or Tier, or Node) Dashboards this means that there are no registered Business Transactions relating to that Application (or Tier, or Node).
Dashboards (or flow maps, more accurately) generally show a view of registered Business Transactions (not simply of entities which are known to the Controller).
Have a look at the docs for an explanation of what a Business Transaction is and how these can be configured should none be detected OOTB:
https://docs.appdynamics.com/21.2/en/application-monitoring/configure-instrumentation/transaction-detection-rules
https://docs.appdynamics.com/21.2/en/application-monitoring/business-transactions
I have a static site (Gatsby) that builds with GitHub Actions and uses data organized in Contentful. The content often changes in quick bursts (like 10 changes within 10 minutes), and this currently results in the site being rebuilt multiple times in a row for no reason.
Is there any simple mechanism (in GitHub or Contentful) that can be used to handle this issue?
If not, what might be useful approaches to handle this problem?
Contentful DevRel here. 👋
Depending on the needs, I see people implementing static regeneration in different ways.
Rebuild after triggered webhooks
Define and send auto-save or publish webhooks from Contentful to your build server to trigger a regeneration. As you described, this can lead to a lot of rebuilds, depending on how busy the users in your Contentful space are.
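If you stay with webhooks, one workaround is to debounce them yourself: put a small proxy between Contentful and your build pipeline that only triggers a build once the edits have gone quiet for a while. Below is a minimal, hypothetical Java sketch; the port, quiet period, and repository are assumptions, and the GitHub call uses the standard repository_dispatch endpoint, which a GitHub Actions workflow can listen for via the repository_dispatch trigger.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical debouncing proxy: Contentful webhooks land here, and a GitHub
// repository_dispatch event (which a GitHub Actions workflow can react to)
// is only fired after a quiet period with no further edits.
public class DebouncedBuildTrigger {
    private static final long QUIET_PERIOD_SECONDS = 600; // assumed: 10 minutes
    private static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private static ScheduledFuture<?> pending;

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/contentful-webhook", exchange -> {
            exchange.sendResponseHeaders(204, -1); // acknowledge right away
            exchange.close();
            synchronized (DebouncedBuildTrigger.class) {
                if (pending != null) {
                    pending.cancel(false); // another edit arrived: reset timer
                }
                pending = scheduler.schedule(DebouncedBuildTrigger::triggerBuild,
                        QUIET_PERIOD_SECONDS, TimeUnit.SECONDS);
            }
        });
        server.start();
    }

    // Fires the repository_dispatch event; OWNER/REPO are placeholders.
    private static void triggerBuild() {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/OWNER/REPO/dispatches"))
                .header("Authorization", "token " + System.getenv("GITHUB_TOKEN"))
                .header("Accept", "application/vnd.github+json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"event_type\":\"contentful-update\"}"))
                .build();
        try {
            HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```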
Add a build trigger to the Contentful UI
Contentful's App Framework lets you extend the Contentful interface with custom UI. For example, you could set up this custom webhook app built by the community, which allows you to trigger builds with a button click.
For Netlify, there's an integration available. Unfortunately, as of now, other build pipelines (Vercel, Travis CI, GitHub Actions) would need something custom.
For your case, I recommend having a look at a custom build trigger in the UI.
I am using Chrome DevTools to generate an audit report, which I then save as a .json file. How can I reopen that report in Chrome? I cannot find a way to open the .json file in Chrome.
Open-AudIT intelligently scans an organization's network and stores the configurations of the discovered devices.
A powerful reporting framework enables information such as software licensing, configuration changes, non-authorized devices, capacity utilization, and hardware warranty status to be extracted and explored.
Open-AudIT Enterprise comes with additional features including Business Dashboards, Report filtering, Scheduled discovery, Scheduled Reports and Maps.
I've been reading about how the GemFire distributed data store/management/cache system performs notifications. While reading, I had this question.
GemFire seems to be using MBeans to create notifications during events. How different/suitable is using MBeans to create notifications instead of implementing a listener-based approach? (Not just in GemFire, but in general.)
Note: I am very new to the topic of MBeans; I just understand that their main purpose is to expose resources to be managed.
CONTEXT
...topic of MBeans... their main purpose is to expose resources to be managed.
That is correct. (GemFire) Resources exposed as MBeans can both be queried and altered, depending on what the MBean exposes for the resource (e.g. Region, DiskStore, Gateway, AEQ, etc), using JMX.
GemFire's JMX interface can then be consumed by applications and tools that use the JMX API. GemFire's Gfsh (command-line shell and management tool) along with Pulse (web monitoring tool) are both examples of JMX clients and the kinds of applications you could write that use JMX.
You can also use the standard JDK tools like jconsole or jvisualvm to connect to a GemFire Manager (managing node in the cluster that federates the view of all the members in the cluster as well as the ability to control any single member from the Manager). See GemFire's section in the User Guide on Management for more details.
Contrasting that with GemFire callbacks: callbacks (e.g. CacheListener) can be used by peer/client cache applications to register interest in certain types of events, like Region entry creation/updates, etc. Other callbacks like CacheLoaders can be used to 'read through' to an external data source (e.g. an RDBMS) on a Cache miss. Likewise, a CacheWriter can be used to 'write through' to an external data source on a Cache (Region) create/update, or perhaps asynchronously with an AEQ/AsyncEventListener performing a 'write-behind' to the external data source.
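To make that concrete, here is a minimal sketch of a CacheListener that just logs entry events. It uses the Apache Geode package names; older GemFire releases ship the same API under com.gemstone.gemfire.

```java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Sketch: a CacheListener that reacts to Region entry events. Extending
// CacheListenerAdapter means only the callbacks of interest are overridden.
public class LoggingCacheListener extends CacheListenerAdapter<String, Object> {

    @Override
    public void afterCreate(EntryEvent<String, Object> event) {
        System.out.printf("entry created: %s -> %s%n",
                event.getKey(), event.getNewValue());
    }

    @Override
    public void afterUpdate(EntryEvent<String, Object> event) {
        System.out.printf("entry updated: %s: %s -> %s%n",
                event.getKey(), event.getOldValue(), event.getNewValue());
    }
}
```

You would register the listener on a Region, e.g. via RegionFactory.addCacheListener(..) or in cache.xml, after which it receives events for that Region's entries.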
There are many other callbacks and ways in which these callbacks can be used, but nearly all are used programmatically in a GemFire client/peer Cache application to "receive" notifications of some type.
For more details, see the GemFire User Guide on Events and Event Handling.
ANSWER
Now, when it comes to "sending" notifications, GemFire does a fair amount of distribution on your application's behalf. JMX is primarily used to send notifications about management changes... a Region was added, the eviction policy changed, a Function was deployed, etc. In contrast, GemFire sends distribution events when data changes to other members in the cluster that are interested in the event. "Interested" members typically include other nodes in the cluster that host the same Region and have the same keys/values, which need to be updated, and in certain cases atomically (in a TX) for consistency's sake.
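As an aside, a plain JMX client can subscribe to those management notifications directly. A minimal, illustrative sketch follows; the service URL and the DistributedSystem MBean's ObjectName are assumptions to verify against your own Manager (jconsole will show the actual names and ports):

```java
import javax.management.MBeanServerConnection;
import javax.management.NotificationListener;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: connect to an (assumed) GemFire Manager over JMX and print every
// notification emitted by the DistributedSystem MBean.
public class ManagementNotifications {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi"); // assumed host/port
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed ObjectName for the federated DistributedSystem MBean.
            ObjectName system = new ObjectName("GemFire:service=System,type=Distributed");
            NotificationListener listener = (notification, handback) ->
                    System.out.printf("[%s] %s%n",
                            notification.getType(), notification.getMessage());
            mbs.addNotificationListener(system, listener, null, null);
            Thread.sleep(60_000); // keep listening for a minute
        }
    }
}
```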
Now, if you want to send notifications from your application, then you are better off using Spring and Spring Data GemFire to configure and access GemFire. Spring provides exceptional support for application messaging.
Of course, other options are available, including JMS, for which Spring provides integration support.
All in all, the events/notifications that are sent and the distribution mechanism used depend highly on the event/notification type. Likewise, the manner in which you are notified (a JMX notification vs. a GemFire callback) depends on the type and purpose of the message.
Sorry for the lengthy explanation; it is a loaded/broad question and a complex subject that can vary greatly depending on the use case.
Hope this helps (a little ;-)
Once I have deployed my application on OpenShift, what is the recommended way / best practice of collecting its 1) CPU, 2) network, 3) memory, and 4) disk storage usage? Basically, how do I monitor an app?
The best would be if these could be displayed in a time-series format. Is it possible to link with a 3rd-party service (e.g. New Relic) to do that?
Thanks.
I would say that New Relic would be the best way to go for most folks. OpenShift does have a marketplace that brings in lots of different 3rd-party solutions and makes them super easy to integrate. New Relic is available, and best of all you can use it for free. You can go to marketplace.openshift.com to add New Relic, and there's even a KB article that will walk you through it step by step here: https://help.openshift.com/hc/en-us/articles/203467070-How-do-I-add-New-Relic-to-my-application-in-the-OpenShift-Marketplace-.
For the sake of stackoverflow, here are the contents of that article:
1. Go to marketplace.openshift.com and log in
2. Locate New Relic
3. Click on "Try the Free Edition"
4. Complete the checkout steps.
This will create your www.newrelic.com account. You can confirm this by going to Purchased Products at the top of the page, then to your New Relic add-on, and clicking on "New Relic". This should bring you over to newrelic.com and automatically log you in with your OpenShift Marketplace account.
To add New Relic to an individual OpenShift application:
Click on Purchased Products
In the New Relic section, you should have something like "newrelic_6a260 Standard" and an "add to apps" button.
Click on the "add to apps" button
Select the application you want to add New Relic to.
There are two other options you can use.
AppDynamics - I have used their tools and I really like them for monitoring. It is available as well through the Online Store.
DataDog - I have not used them, but I have seen the demos at their booth and it looks really good as well.
Would love to hear what you choose and your experience.
You should consider Sysdig Container Monitoring.
Of all the tools mentioned, it's the only one that was purpose-built for containers. It uses the metadata from OpenShift to let you group containers dynamically into services (namespaces, deployments, etc.).
It gives you host, container, and application metrics, including response time of containers and services using network data.
It provides custom alerting and dashboarding as well.
Finally, if you're the service provider, they have functionality that enables "service-based access controls" - basically allowing you to limit data access to certain services, again based on OpenShift's metadata.
Sysdig can be used as a cloud service or as on-premises software, depending on your use case. Here is a link to their OpenShift Commons briefing: https://www.youtube.com/watch?v=-w-OD78Hno0