Browser only respects first domain in certificate - google-chrome

I am generating a self-signed certificate which I have added to my keychain, and I can see that it is used by Firefox and Chrome.
But when I visit https://world.localhost, the browser says the certificate is invalid because it is issued for localhost. All the domains below are included in the certificate. When I change their order, I can see that the browser only compares the topmost entry (DNS.1) against the requested domain, yet all the domains are present when I view the certificate in the browser.
What is wrong in this case?
[ req ]
default_bits = 2048
default_keyfile = dev.cert.key
distinguished_name = subject
#req_extensions = req_ext
req_extensions = v3_req
x509_extensions = x509_ext
string_mask = utf8only
[ x509_ext ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
[ alternate_names ]
DNS.1 = localhost
DNS.2 = *.localhost
DNS.3 = *.test.localhost
DNS.4 = *.www.localhost
DNS.5 = *.api.localhost

The reason this is not working as expected is that the browser treats the rightmost label as the top-level domain.
Wildcard certificates like *.com are not valid, and neither is *.localhost.
When I add another label, the whole thing works:
*.domain.localhost matches www.domain.localhost and is valid.
*.localhost matches www.localhost but is not valid.
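For reference, a SAN section along those lines that browsers do accept could look like this (domain is just a placeholder label here; the wildcard then covers www.domain.localhost, api.domain.localhost, and so on):
[ alternate_names ]
DNS.1 = localhost
DNS.2 = domain.localhost
DNS.3 = *.domain.localhost
You can double-check which names actually ended up in the certificate with openssl x509 -in dev.cert.crt -noout -text (substitute your own certificate file) and looking at the X509v3 Subject Alternative Name block.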

Sent a message to group from ejabberd server

I sent a message to a group from the ejabberd server, but I get:
Hook user_receive_packet crashed when running mod_mam:user_receive_packet
send_message(Type, From, To, Subject, Body, StaticNumber) ->
    CodecOpts = ejabberd_config:codec_options(),
    try xmpp:decode(
          #xmlel{name = <<"message">>,
                 attrs = [{<<"to">>, To},
                          {<<"from">>, From},
                          {<<"type">>, Type},
                          {<<"id">>, p1_rand:get_string()}],
                 children =
                     [#xmlel{name = <<"subject">>,
                             children = [{xmlcdata, Subject}]},
                      #xmlel{name = <<"groupcontent">>,
                             attrs = [{<<"sendername">>, <<"Admin">>},
                                      {<<"acknowStatus">>, <<"0">>},
                                      {<<"fromadmin">>, StaticNumber}],
                             children = []},
                      #xmlel{name = <<"body">>,
                             children = [{xmlcdata, Body}]}]},
          ?NS_CLIENT, CodecOpts) of
        #message{from = JID} = Msg ->
            State = #{jid => JID},
            ejabberd_hooks:run_fold(user_send_packet, JID#jid.lserver, {Msg, State}, []),
            ejabberd_router:route(Msg)
    catch _:{xmpp_codec, Why} ->
            {error, xmpp:format_error(Why)}
    end.
Function call:
send_message("normal",
    list_to_binary("123456789@xmpp.designcafe.com"),
    list_to_binary("6ff3d0a4-c281-41bd-a262-c65bd767014d@mix.xmpp.designcafe.com"),
    list_to_binary("text"), <<"test">>, <<"123456789">>);
I could not fix the above issue.
send_message("normal",
Instead of "normal", you must provide groupchat as a binary, that is:
send_message(<<"groupchat">>,
With that change it works for me using ejabberd 22.05. It is important that From is an existing account and that it has joined the MIX channel. Of course, the MIX channel must exist too.
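For completeness, the call from the question with just the type changed would look like this sketch (same JIDs and arguments as above):
send_message(<<"groupchat">>,
    list_to_binary("123456789@xmpp.designcafe.com"),
    list_to_binary("6ff3d0a4-c281-41bd-a262-c65bd767014d@mix.xmpp.designcafe.com"),
    list_to_binary("text"), <<"test">>, <<"123456789">>).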

Can't map Samba share from CentOS to Win10 - error 67 "The network name cannot be found"

I've just followed a guide on installing Samba, adding a Samba user, and configuring the smb.conf file:
[global]
workgroup = SAMBA
security = user
passdb backend = tdbsam
printing = cups
printcap name = cups
load printers = yes
cups options = raw
[homes]
comment = Home Directories
valid users = %S, %D%w%S
browseable = No
read only = No
inherit acls = Yes
[printers]
comment = All Printers
path = /var/tmp
printable = Yes
create mask = 0600
browseable = No
[print$]
comment = Printer Drivers
path = /var/lib/samba/drivers
write list = @printadmin root
force group = @printadmin
create mask = 0664
directory mask = 0775
[Stuff]
path = /mystuff
guest ok = no
available = yes
valid users = livingroom
read only = no
browsable = yes
writeable = yes
On my Win10 machine I can browse to \\192.168.100.6 and get prompted for a login (username livingroom), which it accepts. I then see two folders in Explorer: 'livingroom' and 'Stuff'.
However, when I double-click either of them it tries for a while before eventually failing with Error code: 0x80070043 - The network name cannot be found.
Any ideas why it's saying the network name cannot be found when I'm using the IP address to access it?
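One way to narrow this down, as a sketch (assuming the share path /mystuff from the config above), is to test the share locally on the CentOS box before involving the Windows client:
# validate smb.conf and print the effective share definitions
testparm -s
# confirm the share path exists and check its permissions
ls -ld /mystuff
# connect to the share locally as the Samba user
smbclient //localhost/Stuff -U livingroom
If the local connection works, the problem lies between the Windows client and the server; if it fails the same way, the share definition or the path itself is the likely culprit.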

Check_MK - Custom check params specified in wato not being given to check function

I am working on a check_mk plugin and can't seem to get the WATO-specified params passed to the check function when it runs, for one check in particular...
The check param rule shows up in WATO.
It writes correct-looking values to rules.mk.
Clicking the Analyze check parameters icon from a host's service discovery shows the rule as active.
The check parameters displayed in service discovery show the title from the WATO file, so it seems like it is associating things correctly.
Running cmk -D <hostname> shows the check as always having the default values, though.
I have been staring at it for a while and am out of ideas.
Check_MK version: 1.2.8p21 Raw
Bulk of the check file:
factory_settings["elasticsearch_status_default"] = {
    "min": (600, 300)
}

def inventory_elasticsearch_status(info):
    for line in info:
        yield restore_whitespace(line[0]), {}

def check_elasticsearch_status(item, params, info):
    for line in info:
        name = restore_whitespace(line[0])
        message = restore_whitespace(line[2])
        if name == item:
            return get_status_state(params["min"], name, line[1], message, line[3])

check_info['elasticsearch_status'] = {
    "inventory_function"      : inventory_elasticsearch_status,
    "check_function"          : check_elasticsearch_status,
    "service_description"     : "ElasticSearch Status %s",
    "default_levels_variable" : "elasticsearch_status_default",
    "group"                   : "elasticsearch_status",
    "has_perfdata"            : False
}
WATO file:
group = "checkparams"
#subgroup_applications = _("Applications, Processes & Services")
register_check_parameters(
    subgroup_applications,
    "elasticsearch_status",
    _("Elastic Search Status"),
    Dictionary(
        elements = [
            ( "min",
              Tuple(
                  title = _("Minimum required status age"),
                  elements = [
                      Age(title = _("Warning if below"), default_value = 600),
                      Age(title = _("Critical if below"), default_value = 300),
                  ]
              )
            )
        ]
    ),
    None,
    match_type = "dict",
)
Entry in rules.mk from WATO rule:
checkgroup_parameters.setdefault('elasticsearch_status', [])
checkgroup_parameters['elasticsearch_status'] = [
( {'min': (3600, 1800)}, [], ALL_HOSTS ),
] + checkgroup_parameters['elasticsearch_status']
Let me know if any other information would be helpful!
EDIT: Any help would be appreciated.
I posted the question here as well, and the mystery got solved.
I was matching the WATO rule to item None (the 5th positional arg in the WATO file), but since this check had multiple items inventoried under it (none of which had the id None), the rule was applying to the host but not to any of the specific service checks.
The fix was to replace that param with:
TextAscii( title = _("Status Description"), allow_empty = True),
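Putting it together, the register_check_parameters call from the WATO file above ends up looking roughly like this with the item parameter swapped in (a sketch of the described fix, not verified against 1.2.8p21):
register_check_parameters(
    subgroup_applications,
    "elasticsearch_status",
    _("Elastic Search Status"),
    Dictionary(
        elements = [
            ( "min",
              Tuple(
                  title = _("Minimum required status age"),
                  elements = [
                      Age(title = _("Warning if below"), default_value = 600),
                      Age(title = _("Critical if below"), default_value = 300),
                  ]
              )
            )
        ]
    ),
    # was None, which only matched item None instead of the inventoried status items
    TextAscii( title = _("Status Description"), allow_empty = True),
    match_type = "dict",
)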

GitLab: set up Mailjet in gitlab.rb

I've been trying for a few hours to set up a mail server in GitLab (Omnibus) using Mailjet.
In the Mailjet SMTP settings I've got these credentials:
- Username (API Key)
- Password (Secret Key)
- SMTP Server: ....mailjet.com
- Port: 25 or 587 (some providers block port 25)
- Use TLS: optional
The config in gitlab.rb looks like this:
################################
# GitLab email server settings #
################################
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "....mailjet.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "(Username(API Key))"
gitlab_rails['smtp_password'] = "(Password(Secret Key))"
gitlab_rails['smtp_domain'] = "my websites domain"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true
This doesn't work.
Is it correct to use the website's domain as 'smtp_domain', or should I use ...mailjet.com?
Does somebody know how to set this up?
Here are some examples of how to do the setup, but no information about Mailjet.
Help would be greatly appreciated.
I got help from Mailjet support and it works now.
Use port 80 and disable TLS and SSL.
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "....mailjet.com"
gitlab_rails['smtp_port'] = 80
gitlab_rails['smtp_user_name'] = "(Username(API Key))"
gitlab_rails['smtp_password'] = "(Password(Secret Key))"
gitlab_rails['smtp_domain'] = "....mailjet.com"
gitlab_rails['smtp_authentication'] = "login"
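To verify the settings after reloading, GitLab's Rails console can send a test mail (the recipient address below is just a placeholder):
sudo gitlab-ctl reconfigure
sudo gitlab-rails console
# then, inside the console:
Notify.test_email('you@example.com', 'Test from GitLab', 'Checking Mailjet SMTP settings').deliver_now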

Vista UAC Issues with samba and Admin Credentials

We have Samba set up for our shared drive; I have pasted the smb.conf file below. Everything is working well except when we try to run an EXE file using Windows Vista. When we run an EXE file, it first asks for UAC control and then pops up the username and password prompt. You must then type your username and password in again before it will run.
I think the issue is that UAC is now running the application as Admin instead of as the logged-in user, so the first username and password that is cached is not seen by the admin user. Does anyone know of a workaround for this?
smb.conf:
[global]
passdb backend = tdbsam
security = user
encrypt passwords = yes
preferred master = Yes
workgroup = Workgroup
netbios name = Omni
bind interfaces only = True
interfaces = lo eth2
;max disk size = 990000 ;some programs (like PS7) can't deal with more than 1TB
socket options = TCP_NODELAY
server string = Omni
;smb ports = 139
debuglevel = 1
syslog = 0
log level = 2
log file = /var/log/samba/%U.log
max log size = 61440
vfs objects = omnidrive recycle
recycle:repository = RecycleBin/%U
recycle:keeptree = Yes
recycle:touch = No
recycle:versions = Yes
recycle:maxsize = 0
recycle:exclude = *.temp *.mp3 *.cat
omnidrive:log = 2
omnidrive:com_log = 1
omnidrive:vscan = 1
omnidrive:versioningState = 1
omnidrive:versioningMaxFileSize = 0
omnidrive:versioningMaxRevSize = 7168
omnidrive:versioningMaxRevNum = 1000
omnidrive:versioningMinRevNum = 0
omnidrive:versioningfilesInclude = /*.doc/*.docx/*.xls/*.xlsx/*.txt/*.bmp/
omnidrive:versioningfilesExclude = /*.tmp/*.temp/*.exe/*.com/*.jarr/*.bat/.*/
full_audit:failure = none
full_audit:success = mkdir rename unlink rmdir write open close
full_audit:prefix = %u|%I|%m|%S
full_audit:priority = NOTICE
full_audit:facility = LOCAL6
;dont descend = RecycleBin
veto files = /.subversion/*.do/*.do/*.bar/*.cat/
client ntlmv2 auth = yes
[netlogon]
path = /var/lib/samba/netlogon
read only = yes
[homes]
read only = yes
browseable = no
[share1]
path = /share1
read only = no
browseable = yes
writable = yes
admin users = clinton1
public = no
create mask = 0770
directory mask = 0770
nt acl support = no
;acl map full control = no
hide unreadable = yes
store dos attributes = yes
map archive = no
map readonly = Permissions
If anyone cares, this is how I fixed the issue on Vista:
I set a registry key to link the UAC account and the non-UAC account.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
EnableLinkedConnections = (DWORD) 1
The password prompt goes away.
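If you would rather script it than use regedit, something like this from an elevated command prompt should set the same value (reboot afterwards):
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f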
I think that you can also address this by turning off UAC in Vista or Windows 7. Here's a link for doing that: Turn User Account Control on or off