I am trying (so far unsuccessfully) to get the shiro.ini in Zeppelin to use CAS.
I followed these instructions
http://shiro.apache.org/cas.html
casFilter = org.apache.shiro.cas.CasFilter
casFilter.failureUrl = /error.html
casRealm = org.apache.shiro.cas.CasRealm
casRealm.defaultRoles = USER
casRealm.casServerUrlPrefix = https://ticketserver.com
casRealm.casService = https://tickettranslater.com/j_spring_cas_security_check
casSubjectFactory = org.apache.shiro.cas.CasSubjectFactory
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.subjectFactory = $casSubjectFactory
securityManager.realms = $casRealm
### If caching of user is required then uncomment below lines
#cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
#securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
#securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/interpreter/** = authc, roles[USER]
/api/configurations/** = authc, roles[USER]
/api/credential/** = authc, roles[SOMEOTHER]
/api/login = casFilter
/** = authc
#/** = anon
#/** = authc
The casService is what should translate the ticket to a user.
The casServerUrlPrefix is where one gets the tickets.
If I set shiro.loginUrl = https://ticketserver.com?service=https://tickettranslater.com/j_spring_cas_security_check
it works, except that the Origin header gets lost along the way and the login fails.
Both ticketserver.com and tickettranslater.com are on the network and work for plenty of other applications.
How do I set up the shiro.ini so the CAS login chain is correctly handled?
This configuration works with Apache Zeppelin 0.6.2.
If you are already authenticated against a CAS server, you will be automatically authenticated into Apache Zeppelin.
You need to compile zeppelin-web, but first you need to add the shiro-cas Maven dependency to zeppelin-web/pom.xml:
<dependencies>
  <dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-cas</artifactId>
    <version>1.2.3</version>
  </dependency>
</dependencies>
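With the dependency in place, rebuild the web module. A sketch of the build command (the exact flags depend on your checkout and Maven setup):
mvn clean package -pl zeppelin-web -DskipTests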
Then configure the file conf/shiro.ini with this:
[main]
casFilter = org.apache.shiro.cas.CasFilter
casFilter.failureUrl = /404.html
casRealm = org.apache.shiro.cas.CasRealm
casRealm.defaultRoles = ROLE_USER
casRealm.casServerUrlPrefix = http://<cas-server>:<port>/cas/p3
casRealm.casService = http://localhost:8080/api/shiro-cas
casSubjectFactory = org.apache.shiro.cas.CasSubjectFactory
securityManager.subjectFactory = $casSubjectFactory
securityManager.realms = $casRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
[urls]
/api/shiro-cas = casFilter
/api/version = anon
/** = authc
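If you also want unauthenticated browser requests to be redirected to the CAS login page (which is what the manual shiro.loginUrl from the question was approximating), a hedged sketch reusing the placeholders above would be:
shiro.loginUrl = http://<cas-server>:<port>/cas/login?service=http://localhost:8080/api/shiro-cas
The service parameter should match casRealm.casService, so the ticket CAS issues is validated against the same URL it was granted for.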
Here is the problem: there is a server (cluster) that serves SMB shares and is joined to an AD domain. Sometimes the smbd service has to be restarted (a reload won't do), and if a Windows client is copying a file at that moment, the transfer is interrupted; after clicking the "Retry" button, the download starts over from the very beginning. Is it possible to make the transfer resume from the point where it was interrupted, perhaps by configuring the client somehow? The client connects as SMBv3 or SMBv2.
The server runs Ubuntu 18.04.
The share is created on ZFS.
smb.conf:
[global]
workgroup = TEST247
realm = test247.ru
security = ads
auth methods = winbind
interfaces = 172.16.11.170/24
bind interfaces only = yes
netbios name = SERVER
encrypt passwords = true
map to guest = Bad User
max log size = 300
dns proxy = no
socket options = TCP_NODELAY
domain master = no
local master = no
preferred master = no
os level = 0
domain logons = no
load printers = no
show add printer wizard = no
log level = 0 vfs:2
max log size = 0
syslog = 0
printcap name = /dev/null
disable spoolss = yes
name resolve order = lmhosts wins host bcast
machine password timeout = 604800
name cache timeout = 660
idmap config TEST247 : backend = rid
idmap config TEST247 : base_rid = 0
idmap config TEST247 : range = 100000 - 200000
idmap config * : range = 200001-300000
idmap config * : backend = tdb
idmap cache time = 604800
idmap negative cache time = 60
winbind rpc only = yes
winbind cache time = 120
winbind enum groups = yes
winbind enum users = yes
winbind max domain connections = 10
winbind use default domain = yes
winbind refresh tickets = yes
winbind reconnect delay = 15
winbind request timeout = 25
winbind separator = ^
private dir = /var/lib/samba/private
lock directory = /run/samba
state directory = /var/lib/samba
cache directory = /var/cache/samba
pid directory = /run/samba
log file = /var/log/samba/smb.%m
include = /etc/samba/smb-res.conf
testparm:
testparm -s /etc/samba/smb.conf
Load smb config files from /etc/samba/smb.conf
WARNING: The "auth methods" option is deprecated
WARNING: The "syslog" option is deprecated
Loaded services file OK.
Server role: ROLE_DOMAIN_MEMBER
smb-res.conf:
[test109_smb]
comment = test109_smb share
path = /config/pool/test109/smb
browseable = yes
writable = yes
inherit acls = yes
inherit owner = no
inherit permissions = yes
map acl inherit = yes
nt acl support = yes
create mask = 0777
force create mode = 0777
force directory mode = 0777
store dos attributes = yes
public = no
admin users =
valid users =
write list =
read list =
invalid users =
vfs objects = acl_xattr
full_audit:prefix = %S|%u|%I
full_audit:facility = local5
full_audit:priority = notice
full_audit:success = none
full_audit:failure = none
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: localtime = yes
shadow: format = shadow_%d.%m.%Y-%H:%M:%S
worm: grace_period = 30
cryptfile: method = grasshopper
Resuming a copy operation doesn't depend on the SMB client or server, but on the application doing the copying.
The standard Windows copy doesn't know how to resume.
Other (third-party) apps (maybe Total Commander?) can be more intelligent about it. You could even write your own app to do a smart copy.
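As a rough illustration of the write-your-own option, here is a minimal Python sketch. It assumes the share is reachable as an ordinary filesystem path (the paths below are hypothetical) and resumes by appending from the destination file's current size:
import os

def resumable_copy(src, dst, chunk_size=1024 * 1024):
    # Resume from however many bytes the destination already holds.
    offset = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(offset)  # skip the part that was already copied
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)

resumable_copy("/mnt/share/big_file.bin", "/tmp/big_file.bin")
Note that this trusts the bytes already written to the destination; a more careful tool would also verify a checksum of the copied prefix.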
My producer isn't throwing any errors, but data is not being sent to the destination topic. Can you recommend any techniques to debug this situation?
I have a call to a Confluent Python Avro Producer inside a synchronous loop to send data to a topic, like so:
self.producer.produce(topic="test2", value=msg_dict)
After this call I have a piece of code like so to flush the queue:
num_messages_in_queue = self.producer.flush(timeout = 2.0)
print(f"flushed {num_messages_in_queue} messages from producer queue in iteration {num_iterations} ")
This executes without any error, but no callback is fired afterwards either. My producer is initialized as follows:
def __init__(self, broker_url=None, topic=None, schema_registry_url=None, schema_path=None):
    try:
        with open(schema_path, 'r') as content_file:
            schema = avro.loads(content_file.read())
    except Exception as e:
        print(f"Error when trying to read avro schema file : {schema_path}")
    self.conf = {
        'bootstrap.servers': broker_url,
        'on_delivery': self.delivery_report,
        'schema.registry.url': schema_registry_url,
        'acks': -1,  # guarantees the record is not lost as long as at least one in-sync replica remains alive
        'enable.idempotence': False,
        "error_cb": self.error_cb
    }
    self.topic = topic
    self.schema_path = schema_path
    self.producer = AvroProducer(self.conf, default_key_schema=schema, default_value_schema=schema)
My callback method is as follows:
def delivery_report(self, err, msg):
    print(f"began delivery_report")
    if err is None:
        print(f"delivery_report --> Delivered msg.value = {msg.value()} to topic= {msg.topic()} offset = {msg.offset()} without err.")
    else:
        print(f"conf_worker AvroProducer failed to deliver message {msg.value()} to topic {self.topic}. got error= {err}")
After this code is executed, I look at my topic on the schema registry container like so:
docker exec schema_registry_container kafka-avro-console-consumer --bootstrap-server kafka:29092 --topic test2 --from-beginning
I see this output:
[2020-04-03 15:48:38,064] INFO Registered kafka:type=kafka.Log4jController MBean
(kafka.utils.Log4jControllerRegistration$)
[2020-04-03 15:48:38,742]
INFO ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka:29092]
check.crcs = true
client.dns.lookup = default
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = console-consumer-49056
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2020-04-03 15:48:38,887] INFO Kafka version : 2.1.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-03 15:48:38,887] INFO Kafka commitId : bda8715f42a1a3db (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-03 15:48:39,221] INFO Cluster ID: KHKziPBvRKiozobbwvP1Fw (org.apache.kafka.clients.Metadata)
[2020-04-03 15:48:39,224] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Discovered group coordinator kafka:29092 (id: 2147483646 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2020-04-03 15:48:39,231] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Revoking previously assigned partitions []
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2020-04-03 15:48:39,231] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2020-04-03 15:48:42,264] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Successfully joined group with generation 1
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2020-04-03 15:48:42,267] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Setting newly assigned partitions [test2-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2020-04-03 15:48:42,293] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Resetting offset for partition test2-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
So the answer is so trivial that it's embarrassing!
But it does point to the fact that in a multilayered infrastructure, a single incorrectly set value can result in a silent failure that is very tedious to track down.
The issue came from an incorrect parameter in my docker-compose.yml file, where the environment variable for broker_url was not set.
The application code needed this variable to reference the Kafka broker.
However, no exception was thrown for this missing parameter, and it was failing silently.
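One cheap guard against this class of failure is to validate required settings up front, so a missing environment variable fails loudly at startup instead of silently downstream. A minimal Python sketch (the variable names are mine, not from the original code):
import os

def require_env(name):
    # Fail fast with a clear message instead of passing None into the producer config.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required environment variable {name!r} is not set")
    return value

broker_url = require_env("BROKER_URL")
schema_registry_url = require_env("SCHEMA_REGISTRY_URL")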
I have updated Tomcat's logging.properties file to print the logs in JSON format.
The issue is that the value of "message" contains characters that are not escaped, which makes my log invalid JSON.
Please let me know how to escape these characters (:, [, ], /) in the JSON using the default tomcat-juli.jar, so the message can be used as a string.
Below is my updated logging.properties file:
handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, 3manager.org.apache.juli.AsyncFileHandler, 4host-manager.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
java.util.logging.SimpleFormatter.format = {"date" :"%1$tF", "timestamp": "%1$tT" , "loggerlevel": "%4$s", "loggersource": "%3$s" , "message": "%5$s%6$s"}%n
1catalina.org.apache.juli.AsyncFileHandler.level = FINE
1catalina.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.AsyncFileHandler.prefix = catalina.
1catalina.org.apache.juli.AsyncFileHandler.formatter = java.util.logging.SimpleFormatter
2localhost.org.apache.juli.AsyncFileHandler.level = FINE
2localhost.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.AsyncFileHandler.prefix = localhost.
2localhost.org.apache.juli.AsyncFileHandler.formatter = java.util.logging.SimpleFormatter
3manager.org.apache.juli.AsyncFileHandler.level = FINE
3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
3manager.org.apache.juli.AsyncFileHandler.formatter = java.util.logging.SimpleFormatter
4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
4host-manager.org.apache.juli.AsyncFileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.AsyncFileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.AsyncFileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.AsyncFileHandler
I would suggest the official way (without using middleware):
Download Elastic JUL formatter and logging core libraries:
https://mvnrepository.com/artifact/co.elastic.logging/jul-ecs-formatter
https://mvnrepository.com/artifact/co.elastic.logging/ecs-logging-core
Put them into the tomcat/bin folder.
List them in the CLASSPATH system property in tomcat/bin/setenv.sh (or setenv.bat).
If there is no such file, create it with the following content. On Unix:
export CLASSPATH="$CLASSPATH:ecs-logging-core.jar:jul-ecs-formatter.jar"
On Windows: set CLASSPATH=%CLASSPATH%;ecs-logging-core.jar;jul-ecs-formatter.jar
Edit tomcat/conf/logging.properties in such way:
handlers = java.util.logging.ConsoleHandler
.handlers = java.util.logging.ConsoleHandler
...
java.util.logging.ConsoleHandler.formatter = co.elastic.logging.jul.EcsFormatter
Such a configuration will make Tomcat write all events in the appropriate Elastic (Logstash) JSON format to the console (stdout), which is very suitable when running in Docker or another container.
Documentation: https://www.elastic.co/guide/en/ecs-logging/java/1.x/setup.html
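For illustration, an ECS-formatted event looks roughly like this (a sketch; the exact field set depends on the formatter version):
{"@timestamp":"2020-04-03T15:48:38.064Z","log.level":"INFO","message":"Server startup in 1234 ms","ecs.version":"1.2.0","process.thread.name":"main","log.logger":"org.apache.catalina.startup.Catalina"}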
I would do the following:
Download and build devatherock's JUL JSON formatter
Download json-simple
Add both to your Tomcat startup classpath in the $CATALINA_BASE/bin/setenv.sh script
In $CATALINA_BASE/conf/logging.properties, add or change the formatters for your log handlers to io.github.devatherock.json.formatter.JSONFormatter
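A sketch of what that could look like (the jar file names are illustrative; match them to the versions you actually downloaded). In $CATALINA_BASE/bin/setenv.sh:
export CLASSPATH="$CLASSPATH:$CATALINA_BASE/bin/json-formatter.jar:$CATALINA_BASE/bin/json-simple.jar"
And in $CATALINA_BASE/conf/logging.properties:
java.util.logging.ConsoleHandler.formatter = io.github.devatherock.json.formatter.JSONFormatter
1catalina.org.apache.juli.AsyncFileHandler.formatter = io.github.devatherock.json.formatter.JSONFormatter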
I'm trying to get Samba to work with a Windows AD, and I can't use my shares through Samba.
My smb.conf:
#GLOBAL PARAMETERS
[global]
workgroup = MY_DOMAIN
realm = MY_DOMAIN.COM
preferred master = no
server string = Linux Test Machine
security = ADS
encrypt passwords = yes
password server = MY_MASTER_DOMAIN_CONTROLLER
log level = 3
log file = /var/log/samba/%m
max log size = 50
printcap name = cups
printing = cups
winbind enum users = Yes
winbind enum groups = Yes
winbind use default domain = Yes
winbind nested groups = Yes
winbind separator = +
idmap uid = 1100-20000
idmap gid = 1100-20000
;template primary group = "Domain Users"
template shell = /bin/bash
[homes]
comment = Home Directories
valid users = %S
read only = No
browseable = No
[tmp]
comment = Directory for storing pictures by jims users
path= /var/tmp
Valid Users = #"MY_DOMAIN+group name" MY_DOMAIN+MY_ACCOUNT
; public=no
writable=yes
browseable=yes
read only = no
guest ok = no
create mask = 0777
directory mask = 0777
wbinfo -u and wbinfo -g work as expected. kinit MY_ACCOUNT@MY_DOMAIN.COM works too.
But I can't connect to Samba. I'm using Debian 5, Samba 3.2.5, and Kerberos 5. My /var/www is 777. Any ideas?
You are missing an idmap backend to connect to AD. Add something like this to the [global] section:
idmap config MY_DOMAIN:default = true
idmap config MY_DOMAIN:schema-mode = rfc2307
idmap config MY_DOMAIN:range = 10000-49999
idmap config MY_DOMAIN:backend = ad
idmap config * : backend = tdb
idmap config * : range = 50000-99999
winbind nss info = rfc2307
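After adding the idmap configuration, restart the daemons and re-check that identity mapping works end to end. A rough sequence (init-script names vary by distribution; on a Debian 5 era system it is /etc/init.d/... rather than systemctl):
/etc/init.d/samba restart
/etc/init.d/winbind restart
wbinfo -u
getent passwd MY_ACCOUNT
(With "winbind use default domain = Yes" the domain prefix is dropped, so the bare account name should resolve.) If getent returns the domain account with a uid in the configured range, the backend is wired up.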
We have Samba set up for our shared drive. I have pasted the smb.conf file below. Everything is working well except when we try to run an EXE file using Windows Vista. When we run an EXE file, it first asks for UAC approval, then it pops up the username and password prompt. You must then type your username and password in again before it will run.
I think the issue is that UAC is now running the application under the Admin account instead of the logged-in user, so the first cached username and password is not seen by the admin user. Does anyone know of a workaround for this?
smb.conf:
[global]
passdb backend = tdbsam
security = user
encrypt passwords = yes
preferred master = Yes
workgroup = Workgroup
netbios name = Omni
bind interfaces only = True
interfaces = lo eth2
;max disk size = 990000 ;some programs (like PS7) can't deal with more than 1TB
socket options = TCP_NODELAY
server string = Omni
;smb ports = 139
debuglevel = 1
syslog = 0
log level = 2
log file = /var/log/samba/%U.log
max log size = 61440
vfs objects = omnidrive recycle
recycle:repository = RecycleBin/%U
recycle:keeptree = Yes
recycle:touch = No
recycle:versions = Yes
recycle:maxsize = 0
recycle:exclude = *.temp *.mp3 *.cat
omnidrive:log = 2
omnidrive:com_log = 1
omnidrive:vscan = 1
omnidrive:versioningState = 1
omnidrive:versioningMaxFileSize = 0
omnidrive:versioningMaxRevSize = 7168
omnidrive:versioningMaxRevNum = 1000
omnidrive:versioningMinRevNum = 0
omnidrive:versioningfilesInclude = /*.doc/*.docx/*.xls/*.xlsx/*.txt/*.bmp/
omnidrive:versioningfilesExclude = /*.tmp/*.temp/*.exe/*.com/*.jarr/*.bat/.*/
full_audit:failure = none
full_audit:success = mkdir rename unlink rmdir write open close
full_audit:prefix = %u|%I|%m|%S
full_audit:priority = NOTICE
full_audit:facility = LOCAL6
;dont descend = RecycleBin
veto files = /.subversion/*.do/*.do/*.bar/*.cat/
client ntlmv2 auth = yes
[netlogon]
path = /var/lib/samba/netlogon
read only = yes
[homes]
read only = yes
browseable = no
[share1]
path = /share1
read only = no
browseable = yes
writable = yes
admin users = clinton1
public = no
create mask = 0770
directory mask = 0770
nt acl support = no
;acl map full control = no
hide unreadable = yes
store dos attributes = yes
map archive = no
map readonly = Permissions
If anyone cares, this is how I fixed the issue on Vista:
I set a registry key to link the UAC account and the non-UAC account:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
EnableLinkedConnections =(dword)1
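For example, from an elevated command prompt (a sketch; a reboot is typically needed for it to take effect):
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f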
The password prompt goes away.
I think that you can also address this by turning off UAC in Vista or Windows 7. Here's a link for doing that: Turn User Account Control on or off