I configured OSSEC by following the procedure from this site: https://blog.rapid7.com/2017/06/30/how-to-install-and-configure-ossec-on-ubuntu-linux/ . But after configuration, when I ran /var/ossec/bin/ossec-control restart
I got
ossec-monitord not running ..
ossec-logcollector not running ..
ossec-remoted not running ..
ossec-syscheckd not running ..
ossec-analysisd not running ..
ossec-maild not running ..
ossec-execd not running ..
OSSEC HIDS v2.9.0 Stopped
Starting OSSEC HIDS v2.9.0 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
In logtest, I got
Error reading XML file '/var/ossec/etc/ossec.conf': XMLERR: Element 'syscheck' not closed. (line 252).
2018/05/22 15:20:59 ossec-testrule(1202): ERROR: Configuration error at '/var/ossec/etc/ossec.conf'. Exiting.
How can I solve this problem?
You have to close the tag in your config file.
Edit ossec.conf:
Add </syscheck> where you opened the <syscheck> element (the error points at line 252).
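For reference, a minimal sketch of what a properly closed syscheck block looks like (the entries inside are placeholders; keep whatever your file already contains):
<ossec_config>
  <syscheck>
    <!-- example entries only -->
    <frequency>79200</frequency>
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  </syscheck>  <!-- this closing tag is what was missing -->
</ossec_config>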
A Jenkins pipeline is building Docker images; the OpenShift plugin(s) are used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference, and I thought --build-loglevel [0-5] might help here. When I used it, I got a warning that since I am using source type 'Binary' in the BuildConfig, logging isn't supported (seriously???)
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and info while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
And so far it seems to work; I got the logs from my OpenShift build in my Jenkins pipeline. Now I'll try to get the logs only if the build does not Complete, to reduce the overall logs.
(for future searchers like me ^^)
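For that follow-up (fetching logs only when the build did not complete), a rough sketch, assuming the OpenShift Jenkins client plugin's object() accessor and the Build resource's status.phase field, and dropping --follow so the returned selector stays usable per the plugin's NOTE above (untested):
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait')
// read the resulting Build object and check its phase
def phase = build.object().status.phase
if (phase != 'Complete') {
    // dump the build logs only when the build did not complete
    build.logs()
}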
I'm hoping someone can help me. I've been working on a website using WordPress for the past 3 months on my Mac, but when I tried to access it this past weekend the MySQL service wouldn't run, and I keep getting the error
ERROR: Failed to start "mysql": cannot start service: Process exited with status 3
Can anyone help me get my site up and running again?
This is the first time using WordPress so I'm not really sure what I'm doing.
Any help would be greatly appreciated
I've managed to run MySQL in safe mode and this is the output it gives:
Last login: Mon Jul 13 22:23:09 on ttys001
Lukes-iMac:~ lukejackson$/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe ; exit;
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: cannot execute binary file
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: Undefined error: 0
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: cannot execute binary file
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: Undefined error: 0
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 674: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
Logging to '/opt/bitnami/mysql/data/Lukes-iMac.err'.2020-07-13T21:43:10.6NZ mysqld_safe Starting mysqld daemon with databases from /opt/bitnami/mysql/data/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 144: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 199: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 937: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
2020-07-13T21:43:10.6NZ mysqld_safe mysqld from pid file /opt/bitnami/mysql/data/Lukes-iMac.pid ended
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 144: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
Your local database server is failing to start up, and it's being started with enough layers between you and it that any reporting it's doing on why is being hidden. To attempt to start it while being able to see what it's complaining about, try the methodology here: start MySQL server from command line on Mac OS Lion
Bitnami Engineer here.
Thank you for using our solution. Please note that the OS X VM solution is a VM we build and configure with the application. You need to open the console of the VM when running any command. Can you try to open the console and run the start command?
sudo /opt/bitnami/ctlscript.sh start
You can use the ctlscript.sh file to stop the services and get the status as well. If the database can't be started, you can take a look at the database's log file (/opt/bitnami/mysql/data/mysqld.log) to get more information
sudo tail -n 30 /opt/bitnami/mysql/data/mysqld.log
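For completeness, the other ctlscript.sh calls mentioned above look roughly like this (run from the VM console; a sketch assuming the standard Bitnami control script subcommands):
sudo /opt/bitnami/ctlscript.sh status
sudo /opt/bitnami/ctlscript.sh stop
sudo /opt/bitnami/ctlscript.sh restart mysql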
I have 4 Elastic Beanstalk deployments: 3 are Corretto 8 and the other one is Corretto 11.
On the Corretto 8 deployments, I can set new configuration without issue. On the Corretto 11 instance, however, any attempt to set a new configuration fails and causes a rollback.
The Corretto versions might not be the problem, but it's the only difference I can see. All 4 apps are Spring Boot apps that run as web servers (i.e., embedded Tomcat with exposed web ports). I am trying to set the exact same configuration name and value, and it only fails on the one instance.
The configuration I'm trying to set is pretty simple:
VALIDATE_RENEWALS = true
Even just trying to set DEBUG = true causes a failure and rollback.
I don't see a lot of information from the console about what's failing. Here is the event log:
2020-03-16 13:55:17 UTC-0600 INFO The environment was reverted to the previous configuration setting.
2020-03-16 13:54:45 UTC-0600 ERROR During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
2020-03-16 13:54:45 UTC-0600 ERROR Failed to deploy configuration.
2020-03-16 13:54:45 UTC-0600 ERROR Unsuccessful command execution on instance id(s) 'i-00553f4ac36afd327'. Aborting the operation.
2020-03-16 13:54:45 UTC-0600 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-03-16 13:54:45 UTC-0600 ERROR [Instance: i-00553f4ac36afd327] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
2020-03-16 13:54:20 UTC-0600 INFO Updating environment XXX's configuration settings.
2020-03-16 13:54:15 UTC-0600 INFO Environment update is starting.
I've also downloaded the full set of logs for the instance and don't see anything obvious. The app's stdout doesn't have any errors or exceptions; it just starts normally and then gets terminated. None of the other log files have messages around the times above, so I'm really not sure what else I can look at.
Edit
The times don't line up, but I do see this in the eb-engine.log file:
2020/03/16 17:54:38.508634 [INFO] checking whether command is applicable to this instance...
2020/03/16 17:54:38.508658 [INFO] this command is applicable to the instance, thus instance should execute command
2020/03/16 17:54:38.508665 [INFO] check whether this is an enhanced env...
2020/03/16 17:54:38.508794 [INFO] Executing instruction: StageJavaApplication
2020/03/16 17:54:38.508858 [ERROR] GetArchivedFileType with file /opt/elasticbeanstalk/deployment/app_source_bundle failed with error open /opt/elasticbeanstalk/deployment/app_source_bundle: no such file or directory
2020/03/16 17:54:38.508868 [ERROR] An error occurred during execution of command [config-deploy] - [StageJavaApplication]. Stop running the command. Error: staging java app failed with error GetArchivedFileType with file /opt/elasticbeanstalk/deployment/app_source_bundle failed with error open /opt/elasticbeanstalk/deployment/app_source_bundle: no such file or directory
I have a Node.js app, and during deployment, after installing dependencies, the following error occurred:
error: Execution of post execute step failed
warning: Failed to remove container "a167df5e218c392e42ec772d5c22311f88043ff99c71ce1a08e7af535ac3817b": Error response from daemon: {"message":"Driver devicemapper failed to remove root filesystem a167df5e218c392e42ec772d5c22311f88043ff99c71ce1a08e7af535ac3817b: Device is Busy"}
error: build error: building my-pokus/hello-seattle-2:d4b8ecde failed when committing the image due to error: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
That happens when the node that the build is scheduled on is being shut down or restarted. Try to re-spin the build so the scheduler will put it on an available node.
Should work :)
The problem is that your image is too big; if the commit takes longer than 2 minutes, this error happens.
I found a workaround here: github origin 13515
Shrink your Docker image :)
Use a more recent S2I-Builder:
In order to temporarily use another version of the Docker image, the easiest way seems to be to simply pull the new image and tag it as the one used by OpenShift:
docker pull docker.io/openshift/origin-sti-builder:v1.5.0-rc.0
docker tag docker.io/openshift/origin-sti-builder:v1.5.0-rc.0 docker.io/openshift/origin-sti-builder:v1.4.1
I have a problem with the connection pool of pax-jdbc in Karaf. I'm trying to inject a MySQL DataSource (DS) through blueprint.xml into my project. To test it, I have built a Karaf command that injects the DS into the command class and executes a query with that connection. That works fine, but the problem is that when I execute the command many times, a new instance of the DS is created for each execution and the connection pool cannot open new connections to MySQL, because the pool has reached its limit.
I have uploaded my code to GitHub at this link: https://github.com/christmo/karaf-pax-jdbc ; you can open a pull request if you find an error in this project.
To test this project you can do the following (the commands are gathered into one listing below the steps):
1. Download Karaf 4.0.4 or apache-karaf-4.1.0-SNAPSHOT.
2. Copy the file karaf-pax-jdbc/etc/org.ops4j.datasource-my-ds.cfg to ${karaf}/etc; this file holds the MySQL configuration, so change it to match your MySQL configuration data.
3. Start the MySQL database engine.
4. Start Karaf: cd ${karaf}/bin/; ./karaf
5. Add the repo of this project with this Karaf command: feature:repo-add mvn:pax/features/1.0-SNAPSHOT/xml/features
6. Install the feature created for this project: feature:install mysql-test
7. Execute the command that reproduces this problem: mysql-connection. This command only executes "Select 1" in MySQL.
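Gathered together, the Karaf-side steps look roughly like this (the feature URL and command names are taken verbatim from the steps above):
cd ${karaf}/bin/
./karaf
# inside the Karaf shell:
feature:repo-add mvn:pax/features/1.0-SNAPSHOT/xml/features
feature:install mysql-test
mysql-connection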
If you execute this "mysql-connection" command 9 times, it will freeze the Karaf prompt, and if you interrupt the execution you can get this exception:
java.sql.SQLException: Cannot get a connection, general error
    at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:146)
    at com.twim.OrmCommand.execute(OrmCommand.java:53)
    at org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)
    at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)
    at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)
    at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)
    at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)
    at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)
    at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)
    at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:583)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:442)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
    at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
    ... 12 more
The problem in your code is in the line System.out.println("--DS--: " + ds.getConnection());.
There you create a connection but never close it. So with every call you drain the pool.
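A minimal sketch of the fix, assuming a command class roughly like the OrmCommand in the repo (the class and field names here are illustrative, not the exact project code): obtain the connection in a try-with-resources block so it is always returned to the pool.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class OrmCommandSketch {

    private DataSource ds; // injected through blueprint.xml, as in the project

    public Object execute() throws SQLException {
        // try-with-resources closes the ResultSet, Statement and Connection,
        // returning the connection to the pool instead of leaking it on every call
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("--DS--: " + rs.getInt(1));
            }
        }
        return null;
    }
}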