Retention failure with Bareos - configuration

I am running into backup disk saturation issues in production.
Directory to back up = 400 GB
Backup disk = 2 TB
Scheduling:
A Full backup every 2 days
An Incremental backup every hour
Retention period of 2 days
I have reproduced the issue in a prototype.
Directory to back up = 400 MB
Size of the backup directory = 48 GB
Scheduling:
A Full backup every hour
An Incremental backup every 15 minutes
Retention period of 2 hours
The issue is that:
After 5 days, the prototype backup directory has reached 48 GB (which accurately reproduces the saturation of the 2 TB production backup disk).
My observations are:
The backup volumes remain on disk.
The total size of the files in the backup directory keeps growing.
It appears that the retention period defined in the Pool configuration is not being taken into account.
Here is an example of a backup still present in the volumes: data that should no longer exist.
My tests:
Adding to the storage Pool:
File Retention = 2 hours
Recycle Oldest Volume = yes
Result: no success.
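For reference, whether pruning happens at all can be checked from bconsole, for example like this (the volume name below is only an example of what the Label Format would generate):
list volumes pool=Incremental_Client2
prune volume=Incremental_Client2-0001 yes
list volumes pool=Incremental_Client2
The first listing shows each volume's VolStatus and retention; after the manual prune the status should switch to Purged once the retention period has expired.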
Here is the prototype's configuration file:
Job {
  Name = "backup-Client2"
  Type = Backup
  Level = Incremental
  Client = Client2
  FileSet = "FileSet_Client2"
  Schedule = "Schedule_Client2"
  Storage = Storage_Client2
  Messages = Standard
  Pool = Incremental_Client2
  Priority = 1
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = Full_Client2
  Incremental Backup Pool = Incremental_Client2
}
FileSet {
  Name = "FileSet_Client2"
  Description = "Backup of all directories under /home and /etc"
  Include {
    Options {
      signature = MD5
    }
    File = "/home"
    File = "/DATA"
    File = "/etc"
  }
}
Schedule {
  Name = "Schedule_Client2"
  Run = Full hourly
  Run = Incremental hourly at 0:15
  Run = Incremental hourly at 0:30
  Run = Incremental hourly at 0:45
}
Client {
  Name = "Client2"
  Address = "192.168.1.78"
  Password = "*****"
}
Storage {
  Name = Storage_Client2
  Address = 192.168.1.78
  Password = "******"
  Device = Client2FileStorage
  Media Type = File
}
Pool {
  Name = Incremental_Client2
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  Volume Retention = 1 hours
  File Retention = 1 hours
  Recycle Oldest Volume = yes
  Label Format = "Incremental_Client2-"
}
Pool {
  Name = Full_Client2
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Action On Purge = Truncate
  Volume Retention = 2 hours
  File Retention = 2 hours
  Recycle Oldest Volume = yes
  Label Format = "Full_Client2-"
}
Did I forget a parameter?
Thanks to anyone who can point me toward a solution.

Related

Restarting the smbd daemon without interrupting a file transfer on the Windows client

The problem is this: there is a server (a cluster) that serves SMB, and the server is joined to an AD domain. Sometimes the smbd service has to be restarted (a reload won't do), but if a file copy is in progress on the client (Windows) at that moment, the transfer is interrupted, and after clicking the "Retry" button the download starts over from the very beginning. Is it possible to make the transfer continue from the point where it was interrupted, perhaps by configuring the client somehow? The client connects over SMBv3 or SMBv2.
The server runs Ubuntu 18.04.
The share is on ZFS.
smb.conf:
[global]
workgroup = TEST247
realm = test247.ru
security = ads
auth methods = winbind
interfaces = 172.16.11.170/24
bind interfaces only = yes
netbios name = SERVER
encrypt passwords = true
map to guest = Bad User
max log size = 300
dns proxy = no
socket options = TCP_NODELAY
domain master = no
local master = no
preferred master = no
os level = 0
domain logons = no
load printers = no
show add printer wizard = no
log level = 0 vfs:2
max log size = 0
syslog = 0
printcap name = /dev/null
disable spoolss = yes
name resolve order = lmhosts wins host bcast
machine password timeout = 604800
name cache timeout = 660
idmap config TEST247 : backend = rid
idmap config TEST247 : base_rid = 0
idmap config TEST247 : range = 100000 - 200000
idmap config * : range = 200001-300000
idmap config * : backend = tdb
idmap cache time = 604800
idmap negative cache time = 60
winbind rpc only = yes
winbind cache time = 120
winbind enum groups = yes
winbind enum users = yes
winbind max domain connections = 10
winbind use default domain = yes
winbind refresh tickets = yes
winbind reconnect delay = 15
winbind request timeout = 25
winbind separator = ^
private dir = /var/lib/samba/private
lock directory = /run/samba
state directory = /var/lib/samba
cache directory = /var/cache/samba
pid directory = /run/samba
log file = /var/log/samba/smb.%m
include = /etc/samba/smb-res.conf
testparm:
testparm -s /etc/samba/smb.conf
Load smb config files from /etc/samba/smb.conf
WARNING: The "auth methods" option is deprecated
WARNING: The "syslog" option is deprecated
Loaded services file OK.
Server role: ROLE_DOMAIN_MEMBER
smb-res.conf:
[test109_smb]
comment = test109_smb share
path = /config/pool/test109/smb
browseable = yes
writable = yes
inherit acls = yes
inherit owner = no
inherit permissions = yes
map acl inherit = yes
nt acl support = yes
create mask = 0777
force create mode = 0777
force directory mode = 0777
store dos attributes = yes
public = no
admin users =
valid users =
write list =
read list =
invalid users =
vfs objects = acl_xattr
full_audit:prefix = %S|%u|%I
full_audit:facility = local5
full_audit:priority = notice
full_audit:success = none
full_audit:failure = none
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: localtime = yes
shadow: format = shadow_%d.%m.%Y-%H:%M:%S
worm: grace_period = 30
cryptfile: method = grasshopper
Resuming a copy operation doesn't depend on the SMB client or server, but on the application doing the copying.
The standard Windows copy doesn't know how to resume.
Other (third-party) apps (Total Commander, perhaps?) can be more intelligent about it. You could even write your own tool to do a smart copy.
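As a hedged example, Windows' own robocopy can do a restartable copy, which resumes a partially copied file after an interruption (the destination path and file name below are placeholders):
robocopy \\SERVER\test109_smb C:\dest bigfile.iso /Z /R:10 /W:5
The /Z switch enables restartable mode, and /R and /W control how many times and how long it retries after a failure.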

Xmx settings in Elastic Beanstalk through environment properties

I have been trying to increase the memory of my Elastic Beanstalk instance using JAVA_OPTS in the environment settings with the values -Xms1G -Xmx3G. Attached is the image of how I have changed the settings.
After applying the changes and restarting the VM, I do not see the changes reflected on the server.
This is how I am verifying it:
sudo jmap -heap <pid>
Heap Configuration:
MinHeapFreeRatio = 0
MaxHeapFreeRatio = 100
MaxHeapSize = 1035993088 (988.0MB)
NewSize = 21495808 (20.5MB)
MaxNewSize = 344981504 (329.0MB)
OldSize = 43515904 (41.5MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 192413696 (183.5MB)
used = 18710296 (17.843528747558594MB)
free = 173703400 (165.6564712524414MB)
9.723993867879342% used
From Space:
capacity = 26738688 (25.5MB)
used = 22166296 (21.139427185058594MB)
free = 4572392 (4.360572814941406MB)
82.89971445121017% used
To Space:
capacity = 27262976 (26.0MB)
used = 0 (0.0MB)
free = 27262976 (26.0MB)
0.0% used
PS Old Generation
capacity = 691011584 (659.0MB)
used = 571332904 (544.8655166625977MB)
free = 119678680 (114.13448333740234MB)
Heap settings cannot be set through environment properties. You have to provide them via a Procfile. The Procfile has to be bundled when uploading.
I had to create a zip file that contained the WAR and the Procfile.
Procfile contents:
web: java -jar -Xms1G -Xmx3G application.war
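For completeness, the bundle uploaded to Elastic Beanstalk might be built like this (the file names mirror the Procfile above; adjust them to your own):
zip app-bundle.zip application.war Procfile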
How to test that this works?
Find the process ID of your webapp/Java process from top.
Use jmap -heap <pid> to get the heap allocation. I tested this on the EC2 instance behind Elastic Beanstalk.
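Concretely, the check might look like this (the PID is just an example; MaxHeapSize should now reflect -Xmx3G):
ps aux | grep java
sudo jmap -heap 2345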

Django and Celery beat scheduler: no database entries

My problem is that the beat scheduler doesn't store entries in the 'tasks' and 'workers' tables. I use Django and Celery. In my database (MySQL) I have added a periodic task "Estimate Region" with an interval of 120 seconds.
This is how I start my worker:
`python manage.py celery worker -n worker.node1 -B --loglevel=info &`
After starting the worker I can see in the terminal that the worker runs and the scheduler picks the periodic task out of the database and executes it.
This is how my task is defined:
@celery.task(name='fv.tasks.estimateRegion',
             ignore_result=True,
             max_retries=3)
def estimateRegion(region):
The terminal shows this:
WARNING ModelEntry: Estimate Region fv.tasks.estimateRegion(*['ASIA'], **{}) {<freq: 2.00 minutes>}
[2013-05-23 10:48:19,166: WARNING/MainProcess] <ModelEntry: Estimate Region fv.tasks.estimateRegion(*['ASIA'], **{}) {<freq: 2.00 minutes>}>
INFO Calculating estimators for exchange:Bombay Stock Exchange
the task "estimate region" returns me a results.csv file, so i can see that the worker and the beat scheduler works. But after that i have no database entries in "tasks" or "workers" in my django admin panel.
Here are my celery settings in settings.py
CELERY_DISABLE_RATE_LIMITS = True
CELERY_TASK_SERIALIZER = 'pickle'
CELERY_RESULT_SERIALIZER = 'pickle'
CELERY_IMPORTS = ('fv.tasks')
CELERY_RESULT_PERSISTENT = True
# amqp settings
BROKER_URL = 'amqp://fv:password@localhost'
#BROKER_URL = 'amqp://fv:password@192.168.99.31'
CELERY_RESULT_BACKEND = 'amqp'
CELERY_TASK_RESULT_EXPIRES = 18000
CELERY_ROUTES = (fv.routers.TaskRouter(), )
_estimatorExchange = Exchange('estimator')
CELERY_QUEUES = (
Queue('celery', Exchange('celery'), routing_key='celery'),
Queue('estimator', _estimatorExchange, routing_key='estimator'),
)
# beat scheduler settings
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
# development settings
CELERY_RESULT_PERSISTENT = False
CELERY_DEFAULT_DELIVERY_MODE = 'transient'
I hope someone can help me :)
Have you started celerycam?
python manage.py celerycam
It will take a snapshot (every 1 second by default) of the current state of tasks.
You can read more about it in the Celery documentation.
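As a rough sketch (assuming djcelery is installed; the worker needs events enabled for celerycam to have anything to snapshot, and --frequency, where supported, controls how often snapshots are written to the database):
python manage.py celery worker -n worker.node1 -B -E --loglevel=info &
python manage.py celerycam --frequency=10.0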

How can I get flume-ng to store logs in JSON format?

I have a Flume consolidator that writes entries from a custom log to an S3 bucket in AWS.
The problem I am having is that it is not storing them in JSON format. I am using flume-ng (Flume 1.2.0), having upgraded from flume-og (really just Flume 0.9.4-cdh3u3). When I was using the og version, it moved logs in JSON format by default without any parameters set. Is it possible for flume-ng to parse the log and write it in JSON format?
Any help is much appreciated. Thank you.
My setup config is below:
agent.sources = source1
agent.sinks = sink1
agent.channels = channel1
agent.sources.source1.type = netcat
agent.sources.source1.bind = localhost
agent.sources.source1.port = 4555
agent.sinks.sink1.type=hdfs
agent.sinks.sink1.hdfs.path = s3://KEY:SECRET#BUCKET/flume/apache/incoming
agent.sinks.sink1.hdfs.filePrefix = log-file-
agent.channels.channel1.type = memory
agent.channels.channel1.capacity = 1000
agent.channels.channel1.transactionCapacity = 100
agent.sources.source1.channels = channel1
agent.sinks.sink1.channel = channel1
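For what it's worth, here is a sketch of where a JSON serializer would plug into this sink, assuming a Flume NG release whose HDFS sink supports the serializer property (the class name below is hypothetical; you would have to implement the EventSerializer yourself):
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.serializer = com.example.flume.JsonEventSerializer$Builder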

Vista UAC issues with Samba and admin credentials

We have Samba set up for our shared drive. I have pasted the smb.conf file below. Everything is working well except when we try to run an EXE file using Windows Vista. When we run an EXE file it first asks for UAC confirmation and then pops up the username and password prompt. You must then type your username and password in again before it will run.
I think the issue is that UAC runs the application under the Admin account instead of the logged-in user, so the credentials cached for the first prompt are not seen by the admin user. Does anyone know of a workaround for this?
smb.conf:
[global]
passdb backend = tdbsam
security = user
encrypt passwords = yes
preferred master = Yes
workgroup = Workgroup
netbios name = Omni
bind interfaces only = True
interfaces = lo eth2
;max disk size = 990000 ;some programs (like PS7) can't deal with more than 1TB
socket options = TCP_NODELAY
server string = Omni
;smb ports = 139
debuglevel = 1
syslog = 0
log level = 2
log file = /var/log/samba/%U.log
max log size = 61440
vfs objects = omnidrive recycle
recycle:repository = RecycleBin/%U
recycle:keeptree = Yes
recycle:touch = No
recycle:versions = Yes
recycle:maxsize = 0
recycle:exclude = *.temp *.mp3 *.cat
omnidrive:log = 2
omnidrive:com_log = 1
omnidrive:vscan = 1
omnidrive:versioningState = 1
omnidrive:versioningMaxFileSize = 0
omnidrive:versioningMaxRevSize = 7168
omnidrive:versioningMaxRevNum = 1000
omnidrive:versioningMinRevNum = 0
omnidrive:versioningfilesInclude = /*.doc/*.docx/*.xls/*.xlsx/*.txt/*.bmp/
omnidrive:versioningfilesExclude = /*.tmp/*.temp/*.exe/*.com/*.jarr/*.bat/.*/
full_audit:failure = none
full_audit:success = mkdir rename unlink rmdir write open close
full_audit:prefix = %u|%I|%m|%S
full_audit:priority = NOTICE
full_audit:facility = LOCAL6
;dont descend = RecycleBin
veto files = /.subversion/*.do/*.do/*.bar/*.cat/
client ntlmv2 auth = yes
[netlogon]
path = /var/lib/samba/netlogon
read only = yes
[homes]
read only = yes
browseable = no
[share1]
path = /share1
read only = no
browseable = yes
writable = yes
admin users = clinton1
public = no
create mask = 0770
directory mask = 0770
nt acl support = no
;acl map full control = no
hide unreadable = yes
store dos attributes = yes
map archive = no
map readonly = Permissions
In case anyone cares, this is how I fixed the issue on Vista:
I set a registry key that links the UAC (elevated) account and the non-UAC account:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
EnableLinkedConnections = (DWORD) 1
The password prompt goes away.
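For reference, the same key can be set from an elevated command prompt (it typically takes effect after a restart):
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f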
I think that you can also address this by turning off UAC in Vista or Windows 7. Here's a link for doing that: Turn User Account Control on or off