Cannot Enable Feature Flags in Apache Superset - configuration

I am using apache-superset 2.0.0 and Python 3.9. I have created the superset_config.py file as shown below and exported SUPERSET_CONFIG_PATH; after initializing Superset I get the message "Loaded your LOCAL configuration at [/home/...]". However, the enabled features are still not shown when creating/editing charts. Can anybody help me out?
# Superset specific config
ROW_LIMIT = 5000
# ---------------------------------------------------
# Feature flags
# ---------------------------------------------------
# Feature flags that are set by default go here. Their values can be
# overwritten by those specified under FEATURE_FLAGS in superset_config.py
# For example, DEFAULT_FEATURE_FLAGS = { 'FOO': True, 'BAR': False } here
# and FEATURE_FLAGS = { 'BAR': True, 'BAZ': True } in superset_config.py
# will result in combined feature flags of { 'FOO': True, 'BAR': True, 'BAZ': True }
FEATURE_FLAGS = {
    "DRILL_TO_DETAIL": True,
    "DASHBOARD_CROSS_FILTERS": True,
    "ALERTS_ATTACH_REPORTS": True,
    "DASHBOARD_NATIVE_FILTERS_SET": True,
    "ALERT_REPORTS": True,
    "DASHBOARD_FILTERS_EXPERIMENTAL": True,
    "ALLOW_ADHOC_SUBQUERY": True,
    "DRUID_JOINS": True
}
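As an aside, the combination described in the comments above is effectively a Python dict merge; a minimal sketch of that behaviour (illustrative only, not Superset's actual code):

```python
# Defaults (as in Superset's own config) and overrides from superset_config.py:
DEFAULT_FEATURE_FLAGS = {"FOO": True, "BAR": False}
FEATURE_FLAGS = {"BAR": True, "BAZ": True}

# Later keys win, so overrides replace defaults and new flags are added:
combined = {**DEFAULT_FEATURE_FLAGS, **FEATURE_FLAGS}
print(combined)  # {'FOO': True, 'BAR': True, 'BAZ': True}
```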
Thanks in advance


How to print an html file as a report after a successful build, in Jenkins?

My current Jenkins Version: Jenkins 2.204.4
I have a python program generating an HTML Report(only contains a table).
I need to print this as the build report after a successful Jenkins pipeline build.
I tried using the dashboard plugin (iframe portlet) and the htmlpublisher plugin, but I cannot get them to print it as a build report.
Also, I want to keep only one file and not have multiple files doing multiple things. Is it possible?
This is the last stage from the pipeline
stage("publish HTML Table") {
    steps {
        script {
            def outputhtml = sh returnStdout: true, script: 'ls -atrl ./output |tail -1|cut -d" " -f11'
            println outputhtml
            def htmlfolder = "output/".concat(outputhtml)
            publishHTML([allowMissing: false, alwaysLinkToLastBuild: true, escapeUnderscores: false, keepAll: false, reportDir: htmlfolder, reportFiles: 'final_result.html', reportName: 'Vulnerability Test Report', reportTitles: ''])
            createSummary(icon: "star-gold.png", text: "${outputhtml}")
        }
    }
}
Edit:
createSummary(icon: "notepad.png", text: readFile('./'.concat(html_folder.trim().concat("/${final_html}".trim()))))
This works now. There was a dependency issue; https://plugins.jenkins.io/badge/ is the plugin that is needed.
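For illustration only, here is the same "newest entry in ./output" selection that the shell one-liner performs, sketched in Python (the pipeline itself of course stays in Groovy/shell):

```python
import os
import pathlib
import tempfile

def newest_entry(directory):
    """Name of the most recently modified entry in `directory` -- what the
    `ls -atrl ./output | tail -1 | cut ...` one-liner extracts."""
    entries = sorted(pathlib.Path(directory).iterdir(),
                     key=lambda p: p.stat().st_mtime)
    return entries[-1].name if entries else None

# Tiny demo with explicit mtimes so the result is deterministic:
with tempfile.TemporaryDirectory() as d:
    for name, mtime in [("old_report", 1000), ("new_report", 2000)]:
        path = pathlib.Path(d) / name
        path.touch()
        os.utime(path, (mtime, mtime))
    print(newest_entry(d))  # new_report
```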

Scrapy multiple regular expressions in LinkExtractor seem to be not working

I've got my regular expressions inside a JSON file. This file gets loaded as a configuration for my spider. The spider creates one LinkExtractor with allow and deny regular expression rules.
I'd like to:
crawl and scrape product pages (scraping / parsing is NOT working)
crawl category pages
avoid general pages (about us, privacy, etc.)
It all works well on some shops, but not on others, and I believe it's a problem with my regular expressions.
"rules": [
    {
        "deny": ["\\/(customer\\+service|ways\\+to\\+save|sponsorship|order|cart|company|specials|checkout|integration|blog|brand|account|sitemap|prefn1=)\\/"],
        "follow": false
    },
    {
        "allow": ["com\\/store\\/details\\/"],
        "follow": true,
        "use_content": true
    },
    {
        "allow": ["com\\/store\\/browse\\/"],
        "follow": true
    }
],
URL patterns:
Products:
https://www.example.com/store/details/Nike+SB-Portmore-II-Solar-Canvas-Mens
https://www.example.com/store/details/Coleman+Renegade-Mens-Hiking
https://www.example.com/store/details/Mueller+ATF3-Ankle-Brace
https://www.example.com/store/details/Planet%20Fitness+18
https://www.example.com/store/details/Lifeline+Pro-Grip-Ring
https://www.example.com/store/details/Nike+Phantom-Vision
Categories:
https://www.example.com/store/browse/footwear/
https://www.example.com/store/browse/apparel/
https://www.example.com/store/browse/fitness/
Deny:
https://www.example.com/store/customer+service/Online+Customer+Service
https://www.example.com/store/checkout/
https://www.example.com/store/ways+to+save/
https://www.example.com/store/specials
https://www.example.com/store/company/Privacy+Policy
https://www.example.com/store/company/Terms+of+Service
Loading the rules from JSON inside my spider's __init__:
for rule in self.MY_SETTINGS["rules"]:
    allow_r = ()
    if "allow" in rule.keys():
        allow_r = [a for a in rule["allow"]]
    deny_r = ()
    if "deny" in rule.keys():
        deny_r = [d for d in rule["deny"]]
    restrict_xpaths_r = ()
    if "restrict_xpaths" in rule.keys():
        restrict_xpaths_r = [rx for rx in rule["restrict_xpaths"]]
    Sportygenspider.rules.append(Rule(
        LinkExtractor(
            allow=allow_r,
            deny=deny_r,
            restrict_xpaths=restrict_xpaths_r,
        ),
        follow=rule["follow"],
        callback='parse_item' if ("use_content" in rule.keys()) else None
    ))
If I do a pprint(vars(onerule.link_extractor)) on each rule, I can see the compiled Python regexes correctly:
{'deny_res': [re.compile('\\/(customer\\+service|sponsorship|order|cart|company|specials|checkout|integration|blog|account|sitemap|prefn1=)\\/')], ...}
{'allow_domains': set(),
 'allow_res': [re.compile('com\\/store\\/details\\/')], ...}
{'allow_domains': set(),
 'allow_res': [re.compile('com\\/store\\/browse\\/')], ...}
Testing the regexes on https://regex101.com/ seems fine as well (note: I use \\/ in my JSON file and \/ on regex101.com).
In my spider logfile, I can see that the product pages are being crawled, but not parsed:
2019-02-01 08:25:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.example.com/store/details/FILA+Hometown-Mens-Lifestyle-Shoes/5345120230028/_/A-6323521;> (referer: https://www.example.com/store/browse/footwear)
2019-02-01 08:25:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.example.com/store/details/FILA+D-Formation-Mens-Lifestyle-Shoes/5345120230027/_/A-6323323> (ref
Why does the spider not parse the product pages?
(The same code with a different JSON works on other shops.)
After hours of debugging and testing, I figured out that I had to change the order of the rules:
1. Products to scrape
2. Deny "about us" etc.
3. Categories to follow
Now it is working.
"rules": [
    {
        "allow": ["com\\/store\\/details\\/"],
        "follow": true,
        "use_content": true
    },
    {
        "deny": ["\\/(customer\\+service|ways\\+to\\+save|sponsorship|order|cart|company|specials|checkout|integration|blog|brand|account|sitemap|prefn1=)\\/"],
        "follow": false
    },
    {
        "allow": ["com\\/store\\/browse\\/"],
        "follow": true
    }
],
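A plain-regex sketch of the failure mode as I read it (this models CrawlSpider's behaviour, it is not Scrapy code): CrawlSpider hands each extracted link to the first rule whose extractor accepts it, and the original first rule had only deny patterns, so it accepted every non-denied link, product pages included, with no callback attached.

```python
import re

# Patterns copied from the JSON rules (JSON's \\/ becomes a plain / here).
deny = re.compile(
    r"/(customer\+service|ways\+to\+save|sponsorship|order|cart|company"
    r"|specials|checkout|integration|blog|brand|account|sitemap|prefn1=)/"
)
allow_details = re.compile(r"com/store/details/")

product = "https://www.example.com/store/details/Nike+SB-Portmore-II-Solar-Canvas-Mens"
denied = "https://www.example.com/store/company/Privacy+Policy"

# The deny-only rule accepts anything its regex does NOT match:
print(deny.search(product) is None)         # True -> rule 1 claimed product links
print(bool(allow_details.search(product)))  # True -> rule 2 would have parsed them
print(bool(deny.search(denied)))            # True -> privacy page correctly denied
```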

Check_MK - Custom check params specified in wato not being given to check function

I am working on a check_mk plugin and can't seem to get the WATO-specified params passed to the check function when it runs, for one check in particular...
The check param rule shows up in WATO.
It writes correct-looking values to rules.mk.
Clicking the "Analyze check parameters" icon from a host's service discovery shows the rule as active.
The check parameters displayed in service discovery show the title from the WATO file, so it seems like it is associating things correctly.
Running cmk -D <hostname> shows the check as always having the default values, though.
I have been staring at it for a while and am out of ideas.
Check_MK version: 1.2.8p21 Raw
Bulk of check file:
factory_settings["elasticsearch_status_default"] = {
    "min": (600, 300)
}

def inventory_elasticsearch_status(info):
    for line in info:
        yield restore_whitespace(line[0]), {}

def check_elasticsearch_status(item, params, info):
    for line in info:
        name = restore_whitespace(line[0])
        message = restore_whitespace(line[2])
        if name == item:
            return get_status_state(params["min"], name, line[1], message, line[3])

check_info['elasticsearch_status'] = {
    "inventory_function"      : inventory_elasticsearch_status,
    "check_function"          : check_elasticsearch_status,
    "service_description"     : "ElasticSearch Status %s",
    "default_levels_variable" : "elasticsearch_status_default",
    "group"                   : "elasticsearch_status",
    "has_perfdata"            : False
}
Wato File:
group = "checkparams"
#subgroup_applications = _("Applications, Processes & Services")
register_check_parameters(
subgroup_applications,
"elasticsearch_status",
_("Elastic Search Status"),
Dictionary(
elements = [
( "min",
Tuple(
title = _("Minimum required status age"),
elements = [
Age(title = _("Warning if below"), default_value = 600),
Age(title = _("Critical if below"), default_value = 300),
]
))
]
),
None,
match_type = "dict",
)
Entry in rules.mk from WATO rule:
checkgroup_parameters.setdefault('elasticsearch_status', [])
checkgroup_parameters['elasticsearch_status'] = [
( {'min': (3600, 1800)}, [], ALL_HOSTS ),
] + checkgroup_parameters['elasticsearch_status']
Let me know if any other information would be helpful!
EDIT: Posted the question here as well, and the mystery got solved.
I was matching the WATO rule to item None (the 5th positional arg in the WATO file), but since this check had multiple items inventoried under it (none of which had the id None), the rule was applying to the host but not to any of the specific service checks.
The fix was to replace that param with:
TextAscii( title = _("Status Description"), allow_empty = True),
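A heavily simplified, hypothetical model of the matching that bit me here (not Check_MK's real code, names are illustrative): a WATO rule only contributes parameters to a service whose item matches the rule's item spec. Registering with None (the 5th positional argument) means "no item", so services inventoried with named items never match, and the check keeps its factory defaults.

```python
def effective_params(service_item, factory_defaults, rules):
    params = dict(factory_defaults)        # start from factory_settings
    for rule_item, rule_params in rules:
        if rule_item == service_item:      # item specs must agree
            params.update(rule_params)     # match_type="dict" merges per key
    return params

factory = {"min": (600, 300)}
wato_rule = {"min": (3600, 1800)}

# Registered against item None while the inventoried item has a real name: no match.
print(effective_params("node-1", factory, [(None, wato_rule)]))
# -> {'min': (600, 300)}

# After registering with a text item so the specs can agree: the rule applies.
print(effective_params("node-1", factory, [("node-1", wato_rule)]))
# -> {'min': (3600, 1800)}
```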

Creating Html table using perl

I have a hash in Perl whose keys are domain names and whose values are references to arrays of blacklisted zones in which the domain is blacklisted. Currently I am checking each domain against 4 zones. If the domain is blacklisted in a particular zone, I push the zone name onto the array.
domain1 => (zone1, zone2)
domain2 => (zone1)
domain3 => (zone3, zone4)
domain4 => (zone1, zone2, zone3, zone4)
I want to create an HTML table from these values in CGI, like:
domain-names    zone1    zone2    zone3    zone4
domain1         true     true     false    false
domain2         true     false    false    false
domain3         false    false    true     true
domain4         true     true     true     true
I tried it using map in CGI like this:
print $q->tbody($q->Tr([
    $q->td([
        map {
            map { $_ } $_, @{$result{$_}}
        } keys %result
    ])
]));
I am unable to get the desired output, and I am not sure how to use if-else inside map.
If I manually generate the td's, then I need to write separate td's for each condition, like:
if (zone1 && zone2 && !zone3 && !zone4) {
    print "<td>true</td><td>true</td><td>false</td><td>false</td>";
}
......
It is very tedious. How can I get that output?
Convert your Hash of Arrays to a Hash of Hashes. This makes it easier to test for the existence of a particular zone.
The following demonstrates and then displays the data in a simple text table:
use strict;
use warnings;

# Your Hash of Arrays
my %HoA = (
    domain1 => [qw(zone1 zone2)],
    domain2 => [qw(zone1)],
    domain3 => [qw(zone3 zone4)],
    domain4 => [qw(zone1 zone2 zone3 zone4)],
);

# Convert to a Hash of Hashes - for easier testing of existence
my %HoH;
$HoH{$_} = { map { $_ => 1 } @{ $HoA{$_} } } for keys %HoA;

# Format and zone list
my $fmt = "%-15s %-8s %-8s %-8s %-8s\n";
my @zones = qw(zone1 zone2 zone3 zone4);

printf $fmt, 'domain-names', @zones;    # Header
for my $domain ( sort keys %HoH ) {
    printf $fmt, $domain, map { $HoH{$domain}{$_} ? 'true' : 'false' } @zones;
}
Outputs:
domain-names    zone1    zone2    zone3    zone4
domain1         true     true     false    false
domain2         true     false    false    false
domain3         false    false    true     true
domain4         true     true     true     true

jqGrid - Pagination not working properly

As you can see in the image, I have 13 records in my DB, but the pager says there is only 1 page (with 10 rows), which is not correct.
Relevant part of the code from my .js
function cria() {
    $("#grid").jqGrid({
        datatype: 'json',
        url: 'json.jsp',
        jsonReader: {repeatitems: false},
        pager: '#paginado',
        rowNum: 10,
        rowList: [10, 20, 30],
        emptyrecords: "Não há registros.",
        recordtext: "Registros {0} - {1} de {2}",
        pgtext: "Página {0} de {1}",
        colNames: ['Código', 'Descrição'],
        colModel: [
            {name: 'codigo', width: 80, sorttype: "int", sortable: true, editable: false},
            {name: 'descricao', width: 120, sortable: true, editable: true, editrules: {required: true}}
        ],
        viewrecords: true,
        editurl: "dadosGrid.jsp?edit=true",
        caption: "Grupos",
        hiddengrid: true
    });
    $("#grid").jqGrid('navGrid', '#paginado', {},
        {edit: true, url: "Adm?aux=edit", closeAfterEdit: true, reloadAfterSubmit: true},
        {add: true, url: "Adm?aux=add", closeAfterAdd: true, reloadAfterSubmit: true},
        {del: false},
        {search: true},
        {refresh: true});
};
Relevant part of the code from my .jsp
String json = "[";
for (UserAux user : users ){
json += "{";
json += "\"codigo\":\""+user.getCod()+"\",";
json += "\"descricao\":\""+user.getDescricao()+"\",";
json += "},";
}
json = json.substring(0,json.length()-1);
json += "]";
out.println(json);
%>
The default options of jqGrid mean that you implement server-side paging. If you want to return all the data at once from the server (which would be a good choice for 13 records), you should just add the loadonce: true option.
Additionally, I would recommend adding the gridview: true, autoencode: true and height: "auto" options to your jqGrid. Moreover, you should remove edit: true, del: false, search: true and refresh: true, which you use inside the options of navGrid, because you use them in the wrong place. If you want to specify those options, you should set properties of the second parameter (which is {} in your code).
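If you keep server-side paging instead of loadonce, the response has to carry the page/total/records/rows structure that jqGrid's default jsonReader expects. A sketch in Python (illustrative only; field names are taken from the question, and using a real JSON serializer also sidesteps the trailing-comma pitfalls of hand-built strings):

```python
import json
import math

def grid_response(all_rows, page, row_num):
    """Illustrative server-side-paging payload for jqGrid's default jsonReader."""
    start = (page - 1) * row_num
    return json.dumps({
        "page": page,                                  # current page
        "total": math.ceil(len(all_rows) / row_num),   # total number of pages
        "records": len(all_rows),                      # total number of records
        "rows": all_rows[start:start + row_num],       # only this page's rows
    })

rows = [{"codigo": str(i), "descricao": "grupo %d" % i} for i in range(1, 14)]
resp = json.loads(grid_response(rows, page=2, row_num=10))
print(resp["total"], resp["records"], len(resp["rows"]))  # 2 13 3
```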