I am trying to configure Grafana for Graphite. I used the link below to configure Graphite.
Graphite is running on my laptop on port 8080. I am able to send data using a sample project that I have created and can see the graphs for it in the Graphite UI. I am sending data to Graphite over port 2003, i.e. the port on which carbon runs.
Graphite installation - https://gist.github.com/albertohm/5697429
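For context, port 2003 is carbon's plaintext listener by default, which accepts one "metric.path value timestamp" line per metric. A minimal Node.js sketch of such a sender (the metric name and value here are made up for illustration) would be:
// Hypothetical sketch: push a single metric to carbon's plaintext listener on port 2003.
// The line format is "<metric.path> <value> <unix-timestamp>\n".
const net = require('net');
const socket = net.createConnection(2003, '127.0.0.1', () => {
  const ts = Math.floor(Date.now() / 1000);
  socket.end(`sample.project.metric 42 ${ts}\n`); // made-up metric name and value
});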
I am now trying to configure Grafana to display the data present in Graphite. I have used the link below to configure Grafana and have only made changes in the config file. When I open the index.html file I can see the Grafana UI, but it is not displaying the data present in Graphite. Can somebody please help me with this?
Grafana installation link - http://grafana.org/docs/
All the services are running on my laptop.
Below is the config file that I am using for Grafana.
define(['settings'],
function (Settings) {
  return new Settings({
    datasources: {
      graphite: {
        type: 'graphite',
        url: "http://127.0.0.1:8080",
        default: true,
        //render_method: 'GET',
      }
    },
    /* Global configuration options
     * ========================================================
     */
    // specify the limit for dashboard search results
    search: {
      max_results: 20
    },
    // default home dashboard
    default_route: '/dashboard/file/default.json',
    //default_route: '/opt/graphite/webapp/graphite/dashboard',
    // set to false to disable unsaved changes warning
    unsaved_changes_warning: true,
    // set the default timespan for the playlist feature
    // Example: "1m", "1h"
    playlist_timespan: "1m",
    // If you want to specify password before saving, please specify it below
    // The purpose of this password is not security, but to stop some users from accidentally changing dashboards
    admin: {
      password: ''
    },
    // Change window title prefix from 'Grafana - <dashboard title>'
    window_title_prefix: 'Grafana - ',
    // Add your own custom panels
    plugins: {
      // list of plugin panels
      panels: [],
      // requirejs modules in plugins folder that should be loaded
      // for example custom datasources
      dependencies: [],
    }
  });
});
Thanks in advance.
I started using next-pwa and the basic setup worked like a charm. Now I want to play with the runtime caching option, which does not work for me:
My next.config.js includes the standard cache entries plus a custom one that should use the strategy StaleWhileRevalidate for each request going to /api/todoitem:
const withPWA = require("next-pwa");
const defaultRuntimeCaching = require("next-pwa/cache");

module.exports = withPWA({
  reactStrictMode: true,
  pwa: {
    dest: "public",
    disable: false, // disable PWA
    register: true,
    skipWaiting: true,
    runtimeCaching: [
      {
        urlPattern: /\/api\/todoitem/,
        method: "GET",
        handler: "StaleWhileRevalidate",
        options: {
          cacheName: "todoApp-api",
          expiration: {
            maxEntries: 64,
            maxAgeSeconds: 24 * 60 * 60, // 24 hours
          },
        },
      },
      ...defaultRuntimeCaching,
    ],
  },
});
Restart npm run dev, fire up the browser -> fetch GET /api/todoitem -> and the console tells me:
workbox Network request for '/api/todoitem' returned a response with status '200'.
workbox Using NetworkOnly to respond to '/api/todoitem'
I tried a number of combinations of regexes, including placing defaultRuntimeCaching before or after my runtimeCaching entry, to no avail.
Any hints to get custom runtimeCache rules working would be greatly appreciated.
next.js 12.0.7
next-pwa 5.4.4
node.js v14.18.2
After some research I found:
In development mode, next-pwa creates a service worker that disables caching. It even tells me so on the console ;-):
[PWA] Build in development mode, cache and precache
are mostly disabled. This means offline support is
disabled, but you can continue developing other
functions in service worker.
When building the app via next build, it creates a service worker that uses my custom rules, and when starting the app with next start the rules seem to work.
That is a bit hard to debug, so I tried to set mode: "production" on my developer machine, but then for some reason the service worker gets rebuilt on every other request, which brings the app to a grinding halt.
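One workaround, sketched below on the assumption that next-pwa's documented disable option behaves as described, is to keep the PWA disabled in development and only verify the runtime caching rules against next build + next start:
// next.config.js (sketch): generate the caching service worker only for production builds,
// so the custom runtimeCaching rules are exercised via `next build` + `next start`.
const withPWA = require("next-pwa");
const defaultRuntimeCaching = require("next-pwa/cache");

module.exports = withPWA({
  pwa: {
    dest: "public",
    // assumption: rely on NODE_ENV instead of forcing mode: "production" during development
    disable: process.env.NODE_ENV === "development",
    runtimeCaching: [
      {
        urlPattern: /\/api\/todoitem/,
        method: "GET",
        handler: "StaleWhileRevalidate",
        options: { cacheName: "todoApp-api" },
      },
      ...defaultRuntimeCaching,
    ],
  },
});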
I'm trying to write a Karate UI test for my webpage, which currently has a self-signed certificate and is hence blocked by the browser by default. According to the documentation, when the acceptInsecureCerts parameter is enabled, this check should be bypassed. But I can't find the correct syntax to pass this parameter to the driver. This is my (simplified) feature file:
Feature: browser automation 1
Background:
* def session = { capabilities: { acceptInsecureCerts: true } }
* configure driver = { type: 'chrome', showDriverLog: true, showProcessLog: true, showBrowserLog: true, webDriverSession: '#(session)' }
Scenario: load demo page
Given driver 'https://127.0.0.1:8443/demo'
* waitUntil('document.readyState == "complete"')
* print 'page loaded'
* screenshot()
Then delay(2000).text('body')
When I run this, I get
13:31:25.237 [nioEventLoopGroup-2-1] DEBUG c.intuit.karate.driver.DriverOptions - << {"id":9,"result":{"result":{"type":"string","value":"Your connection is not private Attackers might be trying to steal your information from ...
Hold on, the chrome driver type is NOT WebDriver based, so webDriverSession will not apply. It would for chromedriver.
I did a quick search and the best I could find is this: ignore-certificate-errors + headless puppeteer+google cloud
So not sure if this works:
addOptions: ['--ignore-certificate-errors']
Please report what you find so that it helps others! Another reference is this, but I'm not sure how up to date it is: https://peter.sh/experiments/chromium-command-line-switches
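If it does work, the configure line would presumably look something like this (untested sketch; addOptions just appends extra command-line switches when Karate launches the Chrome executable):
* configure driver = { type: 'chrome', showDriverLog: true, addOptions: ['--ignore-certificate-errors'] }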
The application cache is now deprecated and browsers like Chrome are removing support.
We have an application that can work 100% offline while storing data in the indexeddb and syncing later when the user is back online. We need to transition this site from using application cache to service worker. We will be using Workbox for our service worker.
There are three main sections of our cache manifest that we must convert.
CACHE Section
This is a list of assets to precache. This is probably the most straightforward to transition, as we are using Workbox to precache these files.
NETWORK Section
We are using * here (probably most common) so that's probably not going to be an issue.
FALLBACK Section
We have quite a few entries in the fallback section. Basically they redirect to the login page and are there in case someone refreshes the page while offline.
Example:
FALLBACK:
/search /login
/customer-edit /login
/foo-bar-baz /login
...
My question:
Is there either 1) a general guide to converting application cache/cache manifest to service workers or 2) some specific guidance for converting the FALLBACK section to the equivalent functionality in a service worker?
Google and Duck Duck Go have not been extremely helpful.
There are existing projects to upgrade the app cache to service workers, but most appear very beta; for example, from Google Chrome Labs: github.com/GoogleChromeLabs/sw-appcache-behavior
This is the solution I came up with using Google's workbox.
Sidenote: Workbox appears to have a solution for most common service worker use cases and has a very flexible distribution model that makes it quite easy to work with, either in a vanilla JS environment or with your framework of choice.
We ended up converting our server-side AppCache (cache manifest) code to generate a service worker. (How to precaching an url list coming from a json file?)
Depending on your server-side language your code will vary, but this is the end product that worked for us:
service-worker.js (generated server side)
const productVersion = "3.01";

importScripts('/assets/workbox/workbox-sw.js');

workbox.setConfig({
  modulePathPrefix: '/assets/workbox/'
});

const { precacheAndRoute, createHandlerBoundToURL } = workbox.precaching;
const { NavigationRoute, registerRoute, setCatchHandler } = workbox.routing;

precacheAndRoute([
  // cache index html
  {url: '/', revision: '3.01' },
  // web workers
  {url: '/assets/some-worker.js?ver=3.01', revision: '' },
  {url: '/assets/other-worker.js?ver=3.01', revision: '' },
  // other js files
  {url: '/assets/shared-function.js', revision: '3.01' },
  // ... removed for brevity
  // css
  {url: '/assets/site.css', revision: '3.01' },
  {url: '/assets/fonts/fonts.css', revision: '3.01' },
  // svg's
  {url: '/assets/images/icon.svg', revision: '3.01' },
  {url: '/assets/images/icon-2.svg', revision: '3.01' },
  // png's
  {url: '/assets/images/img-1.png', revision: '3.01' },
  {url: '/assets/images/favicon/apple-touch-icon-114x114.png', revision: '3.01' },
  // ...
  // ...
  // fonts
  {url: '/assets/fonts/lato-bla-webfont.eot', revision: '3.01' },
  {url: '/assets/fonts/lato-bla-webfont.ttf', revision: '3.01' },
  // sounds
  {url: '/assets/sounds/keypress.ogg', revision: '3.01' },
  {url: '/assets/sounds/sale.ogg', revision: '3.01' },
]);

// Routing for SPA
// This assumes DEFAULT_URL has been precached.
const DEFAULT_URL = '/';
const handler = createHandlerBoundToURL(DEFAULT_URL);
const navigationRoute = new NavigationRoute(handler, {
  denylist: [
    new RegExp('/ping'),
    new RegExp('/upgrade'),
    new RegExp('/cache.manifest'),
  ],
});
registerRoute(navigationRoute);

// This allows the main window to signal the service worker that
// it should go ahead and install if it's waiting.
addEventListener('message', (event) => {
  if (event.data && event.data.type === 'SKIP_WAITING') {
    skipWaiting();
  }
});
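The setCatchHandler import above is one way to approximate the old FALLBACK-to-/login entries. This is a rough sketch only, under the assumption that '/login' is also added to the precache list:
// Hypothetical sketch: rough service-worker equivalent of the AppCache FALLBACK entries.
// Assumes '/login' has been precached; serve it when a navigation request fails.
const { matchPrecache } = workbox.precaching;

setCatchHandler(async ({ request }) => {
  if (request.destination === 'document') {
    return (await matchPrecache('/login')) || Response.error();
  }
  return Response.error();
});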
There are a few other things to note. We had to figure out how to smoothly upgrade from AppCache to service workers. It turns out that generating an empty cache manifest did the trick for us.
We already had an upgrade process in place (prompt the user to upgrade, or force an automatic upgrade with a countdown timer), so we had to do some work to get that working with service workers. Notice that the end of the service worker file has the addEventListener code. We actually call that from an upgrade page to get a smooth upgrade process. It goes something like this:
A) The upgrade script detects that a new version is available (there are many ways to do this: API call, polling, etc.)
B) If the user accepts or the timer expires, redirect the user to an upgrade page. This step is vital b/c you can't update a service worker while the app is still running. So navigate to the upgrade page, wait for the service worker to get installed, then tell it to skip waiting and redirect to the main (login) screen.
C) The user is happily running the new version of the app.
Upgrade page code:
(this is a good page to show an "updating" UI of some type)
<script type="module">
  import { Workbox } from '/assets/workbox/workbox-window.prod.mjs';

  if ('serviceWorker' in navigator) {
    const wb = new Workbox('/serviceworker');

    // This code exists b/c a service worker can't update with just a refresh/reload in the
    // browser. This is b/c on a reload, the old and new page exist simultaneously and the old MUST
    // unload before the new service worker can automatically assume control. Also if multiple pages
    // are open, this blocks the service worker from taking control (multiple pages should not be an issue with this app).
    // This code activates a waiting service worker and _then_ redirects back to the app.

    // Add an event listener to detect when the registered
    // service worker has installed but is waiting to activate.
    wb.addEventListener('waiting', (event) => {
      // Set up a listener that will reload the page as soon as the previously waiting
      // service worker has taken control.
      wb.addEventListener('controlling', (event) => {
        window.location.replace('/login');
      });

      // Send a message telling the service worker to skip waiting.
      // This will trigger the `controlling` event handler above.
      wb.messageSW({ type: 'SKIP_WAITING' });
    });

    wb.register();
  }

  // set a timeout in case the service worker has already installed.
  setTimeout(function () {
    window.location.replace('/login');
  }, 30000);
</script>
Main page (index.html, etc)
(Handles the case where the user is coming to the app with a service worker ready to activate, so it needs a refresh to get the right assets/code loaded)
<script type="module">
  import { Workbox } from '/assets/workbox/workbox-window.prod.mjs';

  if ('serviceWorker' in navigator) {
    const wb = new Workbox('/serviceworker');

    wb.addEventListener('activated', (event) => {
      // `event.isUpdate` will be true if another version of the service
      // worker was controlling the page when this version was registered.
      if (!event.isUpdate) {
        // service worker was activated for the first time.
        // If your service worker is configured to precache assets, those
        // assets should all be available now.
        // This will only happen if the browser was closed when a new version was made available
        // and it will only happen once per service worker install.
        // Reload so all libs are the correct version.
        window.location.reload(true);
      }
    });

    wb.register();
  }
</script>
I deployed my project to IIS. It was working fine locally, but on Windows Server 2008 R2 it showed the issue attached above after login (please check the attached image). The issue was occurring because Internet Explorer Enhanced Security Configuration (IEESC) was on, so I turned it off, but my page was still not working.
Page behavior: 1) No page error, and also no 404 or 403 errors (even with CustomErrors mode On).
2) Controls, including the grid view, were not getting filled from the database by the JSON call.
Solution: 1) Enable the .json file extension. Simply follow these instructions: open the properties for the server in IIS Manager and click MIME Types.
Click "New". Enter ".json" for the extension and "application/json" for the MIME type.
2) Add the following entry to the web.config file.
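Presumably this is the standard staticContent/mimeMap registration (the exact snippet was not preserved, but it would be something like):
<!-- web.config: register the .json extension so IIS serves it with the correct MIME type -->
<system.webServer>
  <staticContent>
    <mimeMap fileExtension=".json" mimeType="application/json" />
  </staticContent>
</system.webServer>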
3) If you deploy your application to IIS, your URI must include your application name as well. So, since your application name is QCValueStream, your URI must be http://localhost/QCValueStream/ManageProjects/GetManageProjectsData/5.
You can automatically detect your base URI and prepend it by adding a line in your master page (ASP.NET web application) or shared _Layout.cshtml (ASP.NET MVC):
<script type="text/javascript">
  var config = {
    contextPath: '@Url.Content("~")'
  }
  var baseUri = config.contextPath;
  // or
  var baseUri = '@Url.Content("~")';

  // Then in your JS you prepend it like this:

  // Incorrect JSON call
  $.getJSON('/ManageProjects/GetManageProjectsData?', { searchText: inputsearchText }, function (data) {
    myData = data;
  });

  // Correct JSON call
  $.getJSON(baseUri + '/ManageProjects/GetManageProjectsData?', { searchText: inputsearchText }, function (data) {
    myData = data;
  });
</script>
Note: Check the URL below for how to turn off Internet Explorer Enhanced Security Configuration for Administrators or users.
http://www.aurelp.com/2013/01/16/how-to-turn-off-internet-explorer-enhanced-security-configuration-step-by-step/
Hello, I have a project that uses gulp for the build framework and karma with jasmine for the testing.
I am trying to integrate proxyquireify to mock the requires. I just added proxyquireify as a browserify plugin in the karma config, as I am using karma-browserify.
But this results in an error on the first line when running the tests, saying 'require is undefined'.
What am I doing wrong?
Here is my karma config:
// Karma configuration
// Generated on Wed Nov 26 2014 17:57:28 GMT+0530 (IST)
module.exports = function(config) {
  config.set({

    // base path that will be used to resolve all patterns (eg. files, exclude)
    basePath: '',

    // frameworks to use
    // available frameworks: https://npmjs.org/browse/keyword/karma-adapter
    frameworks: ['browserify', 'jasmine'],

    // list of files / patterns to load in the browser
    files: [
      './components/renderer/**/*.spec.js',
      './components/tracker/**/*.spec.js'
    ],

    // list of files to exclude
    exclude: [
    ],

    // preprocess matching files before serving them to the browser
    // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
    preprocessors: {
      './components/**/*.spec.js': ['browserify']
    },

    browserify: {
      debug: true,
      plugin: ['proxyquireify/plugin']
    },

    // test results reporter to use
    // possible values: 'dots', 'progress'
    // available reporters: https://npmjs.org/browse/keyword/karma-reporter
    reporters: ['spec'],

    // web server port
    port: 9876,

    // enable / disable colors in the output (reporters and logs)
    colors: true,

    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,

    // enable / disable watching file and executing tests whenever any file changes
    autoWatch: false,

    // start these browsers
    // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
    browsers: ['Chrome'],

    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: false
  });
};
proxyquireify works internally by substituting the require function provided by browserify.
In this case it seems the new, substituted require function was not exposed to the global scope.
I went through the code and found out that proxyquireify creates the new require function, named newRequire, in node_modules/proxyquireify/lib/prelude.js.
The issue I was having was that the newRequire function was not exposed in the global scope as the require function, so I changed node_modules/proxyquireify/lib/prelude.js so that
// Override the current require with this new one
return newRequire;
becomes
// Override the current require with this new one
require = newRequire;
With that, newRequire was properly exposed to the global scope and everything worked fine. Since this change is reset every time I do an npm install, I created a gulp task that applies the change every time before the tests are run; I will add the gulp task for reference:
// Task to modify proxyquireify so that it works properly; there is a bug in the npm library.
// Assumes the `replace` plugin (e.g. gulp-replace) is required elsewhere in the gulpfile.
gulp.task('_patch:proxyquireify', function() {
  // return the stream so dependent tasks wait until the patched file has been written
  return gulp.src(['./node_modules/proxyquireify/lib/prelude.js'])
    .pipe(replace(/return newRequire;/g, 'require = newRequire;'))
    .pipe(gulp.dest('./node_modules/proxyquireify/lib'));
});
I run this task before executing the test tasks, like this:
// Task to run tests
gulp.task('run:test', ['_patch:proxyquireify'], function() {
  // logic to run tests
});
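For reference, the body of that test task could look roughly like this (a sketch that assumes a karma version exposing the Server class, 0.13 or later, and a karma.conf.js in the project root):
// Hypothetical sketch: run karma once the proxyquireify patch task has completed.
var karma = require('karma');

gulp.task('run:test', ['_patch:proxyquireify'], function(done) {
  new karma.Server({
    configFile: __dirname + '/karma.conf.js',
    singleRun: true
  }, done).start();
});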
I hope this helps, thanks