How to connect to a database created in MySQL using ANT

I need to connect to the "mydb" database created in MySQL 5.5.
I figured out from http://ant.apache.org/manual/Tasks/sql.html that the following should do the job, but it does not.
<sql
driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/broadleaf"
userid="root"
password="password">
</sql>
Then I saw in another post that the following could be used to start and stop MySQL using ANT:
<target name="start-db">
<exec executable="C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" osfamily="windows">
</exec>
<exec executable="mysql.server" osfamily="unix">
<arg value="start"/>
</exec>
</target>
<target name="stop-db">
<exec executable="C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" osfamily="windows">
<arg value="-u"/>
<arg value="root"/>
<arg value="shutdown"/>
</exec>
<exec executable="mysql.server" osfamily="unix">
<arg value="stop"/>
</exec>
</target>
Could someone tell me how to glue both these scripts together to start MySQL and then connect to a particular database (e.g. mydb) using an ANT script? And similarly, how to disconnect from that database and stop MySQL again?
Thanks.

Are you asking how to tie everything together in a complete ANT script?
<project name="database-stuff" default="make-it-so">
<target name="make-it-so" depends="start-db,run-sql,stop-db"/>
<target name="start-db">
<exec executable="C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" osfamily="windows">
</exec>
<exec executable="mysql.server" osfamily="unix">
<arg value="start"/>
</exec>
</target>
<target name="stop-db">
<exec executable="C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" osfamily="windows">
<arg value="-u"/>
<arg value="root"/>
<arg value="shutdown"/>
</exec>
<exec executable="mysql.server" osfamily="unix">
<arg value="stop"/>
</exec>
</target>
<target name="run-sql">
<sql driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/broadleaf"
userid="root"
password="password">
-- SQL STATEMENTS GO HERE!!
</sql>
</target>
</project>
If not, you'll have to provide more details about the kind of error you're experiencing.
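One thing worth checking either way: the <sql> task also needs the MySQL JDBC driver on its classpath, otherwise it fails with a driver class-not-found error before any SQL runs. A minimal sketch, assuming the Connector/J jar has been dropped into a local lib/ directory (path and version are just examples):
<target name="run-sql">
<!-- classpath points the task at the MySQL Connector/J jar; adjust path/version to your setup -->
<sql driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/mydb"
userid="root"
password="password"
classpath="lib/mysql-connector-java-5.1.49.jar">
-- SQL STATEMENTS GO HERE!!
</sql>
</target>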

Related

How do I get Keycloak to connect to MySQL DB?

I've been crawling a number of sites like this trying to get Keycloak working with a MySQL persistence layer. I am using Docker, but with my own images, so passwords and other sensitive data are pulled from a secrets manager instead of environment variables or Docker secrets. The images are pretty close to stock other than that, however.
Anyway, I have a MySQL 8 container up and running, and from within the Keycloak 12.0.3 container I can connect to the MySQL container fine:
# mysql -h mysql -u keycloak --password=somethingtochangelater -D keycloak -e "SHOW DATABASES;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+
| Database |
+--------------------+
| information_schema |
| keycloak |
+--------------------+
So there are no connectivity problems between the instances, and that username/password has access to the keycloak database fine.
So then I ran several commands to configure the Keycloak instance (keycloak is installed at /opt/myco/bin/keycloak):
/opt/myco/bin/keycloak/bin/standalone.sh &
# Pausing for server startup
sleep 20
# Add mysql module - JDBC driver unpacked at /opt/myco/bin/keycloak-install/mysql-connector-java-8.0.23/mysql-connector-java-8.0.23.jar
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="module add --name=com.mysql --dependencies=javax.api,javax.transaction.api --resources=/opt/myco/bin/keycloak-install/mysql-connector-java-8.0.23/mysql-connector-java-8.0.23.jar --module-root-dir=/opt/myco/bin/keycloak/modules/system/layers/keycloak/"
# Removing h2 datasource
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:remove"
# Adding MySQL datasource
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-class-name=com.mysql.cj.jdbc.Driver)"
# TODO - add connection pooling options here...
# Configuring data source
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="data-source add --name=KeycloakDS --jndi-name=java:jboss/datasources/KeycloakDS --enabled=true --password=somethingtochangelater --user-name=keycloak --driver-name=com.mysql --use-java-context=true --connection-url=jdbc:mysql://mysql:3306/keycloak?useSSL=false&characterEncoding=UTF-8"
# Testing connection
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:test-connection-in-pool"
# Creating admin user
/opt/myco/bin/keycloak/bin/add-user-keycloak.sh -r master -u "admin" -p "somethingelse"
# Shutting down initial server
/opt/myco/bin/keycloak/bin/jboss-cli.sh --connect command=":shutdown"
This all appears to run fine. Note especially that test-connection-in-pool reports no problems:
{
"outcome" => "success",
"result" => [true],
"response-headers" => {"process-state" => "reload-required"}
}
However, when I go to start the server back up again, it crashes with several exceptions, starting with:
22:31:52,484 FATAL [org.keycloak.services] (ServerService Thread Pool -- 56) Error during startup: java.lang.RuntimeException: Failed to connect to database
at org.keycloak.keycloak-model-jpa#12.0.3//org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:377)
at org.keycloak.keycloak-model-jpa#12.0.3//org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lazyInit(LiquibaseDBLockProvider.java:65)
...
It keeps going, though I suspect that exception is ultimately the fatal one, and it eventually dies with:
22:31:53,114 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 40) WFLYCTL0190: Step handler org.jboss.as.controller.AbstractAddStepHandler$1#33063168 for operation add at address [
("subsystem" => "jca"),
("workmanager" => "default"),
("short-running-threads" => "default")
] failed -- java.util.concurrent.RejectedExecutionException: java.util.concurrent.RejectedExecutionException
at org.jboss.threads#2.4.0.Final//org.jboss.threads.RejectingExecutor.execute(RejectingExecutor.java:37)
at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.rejectShutdown(EnhancedQueueExecutor.java:2029)
...
The module at /opt/myco/bin/keycloak/modules/system/layers/keycloak/com/mysql/main has the jar file and module.xml:
# ls
module.xml mysql-connector-java-8.0.23.jar
# cat module.xml
<?xml version='1.0' encoding='UTF-8'?>
<module xmlns="urn:jboss:module:1.1" name="com.mysql">
<resources>
<resource-root path="mysql-connector-java-8.0.23.jar"/>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
</dependencies>
The standalone.xml file looks reasonable to me:
...
<subsystem xmlns="urn:jboss:domain:datasources:6.0">
<datasources>
...
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
<connection-url>jdbc:mysql://mysql:3306/keycloak?useSSL=false&characterEncoding=UTF-8</connection-url>
<driver>com.mysql</driver>
<security>
<user-name>keycloak</user-name>
<password>somethingtochangelater</password>
</security>
</datasource>
<drivers>
<driver name="h2" module="com.h2database.h2">
<xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
</driver>
<driver name="mysql" module="com.mysql">
<driver-class>com.mysql.cj.jdbc.Driver</driver-class>
</driver>
</drivers>
</datasources>
...
So.... anyone have any idea what's going on? What else do I need to do to get Keycloak talking properly to MySQL? Anything else I can do to debug what the issue is?
Not sure what is wrong with your particular case, but I used the jboss/keycloak image and it connects to MySQL just fine. Maybe you can derive your custom image from there. The full setup is in my blog post: https://link.medium.com/eK6IRducpeb
For a standalone Keycloak server you can try this command:
kc.bat start-dev --db postgres --db-url jdbc:postgresql://localhost:5432/keycloak-server --db-username postgres --db-password root
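For MySQL the equivalent would be something along these lines (assuming a Quarkus-based Keycloak, 17 or later; the database name and credentials are placeholders):
REM placeholders: adjust db name, user and password to your environment
kc.bat start-dev --db mysql --db-url jdbc:mysql://localhost:3306/keycloak --db-username keycloak --db-password somethingtochangelater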

Connecting solr with aws RDS Mysql through data import handler

I recently started implementing SolrCloud on AWS EC2 for search applications. I have created 2 AWS EC2 instances with the following configuration:
EC2 Type - t2.medium
ram - 4GB
Disk Space - 8GB
OS - ubuntu 18.04
For the 2 EC2 instances, I have created a security group which allows all inbound traffic. The NACL has default settings that allow all inbound traffic as well.
Steps Followed to install Apache Solr -
ssh into ec2 :
ssh -i "pem_file" ubuntu#ec2-public-ipv4-address
cd to /opt directory
run --> sudo apt-get update
run --> sudo apt-get install openjdk-11-jdk
Check java -version
run --> wget https://archive.apache.org/dist/lucene/solr/8.3.0/solr-8.3.0.tgz
run --> tar -xvzf solr-8.3.0.tgz
export SOLR_HOME=/opt/solr-8.3.0
Add /opt/solr-8.3.0 to Path environment variable
Update the /etc/hosts file (sudo vim /etc/hosts) with the host entry --
a. public-ip-v4-address-of-ec2 solr-node-1
Started Solr using the following command -->
sudo bin/solr start -c -p 8983 -h solr-node-1 -force
Checked the opened ports using --> sudo lsof -i -P -n | grep LISTEN
Created collections, shards and replicas using --->
bin/solr create -c travasko -d sample_techproducts_configs -n travasko_configs -shards 2 -rf 2 -p 8983
I repeated the same process on the other EC2 machine and ran solr on it.
Now, to use the data import handler in solr, I edited the following files:
solrconfig.xml
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
<lst name="defaults">
<str name="config">data-config.xml</str>
</lst>
</requestHandler>
data-config.xml
<dataConfig>
<dataSource type="JdbcDataSource"
driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://examplerds.cuhj86yfdpid.us-east-1.rds.amazonaws.com:3306/TRAVASKODB1"
user="examplerds"
password="examplerds#123"/>
<document>
<entity name="MOMENTS"
pk="MOMENT_ID"
query="SELECT MOMENT_ID,MOMENT_TEXT FROM MOMENTS"
deltaImportQuery="SELECT MOMENT_ID,MOMENT_TEXT FROM MOMENTS WHERE MOMENT_ID='${dih.delta.MOMENT_ID}'"
deltaQuery="SELECT MOMENT_ID FROM MOMENTS WHERE LAST_MODIFIED > '${dih.last_index_time}'"
>
<field column="MOMENT_ID" name="MOMENT_ID"/>
<field column="MOMENT_TEXT" name="MOMENT_TEXT"/>
</entity>
</document>
</dataConfig>
managed_schema
<schema name="MOMENTS" version="1.5">
<field name="_version_" type="long" indexed="true" stored="true"/>
<field name="MOMENT_ID" type="integer" indexed="true" stored="true" required="true" multiValued="false" />
<field name="MOMENT_TEXT" type="string" indexed="true" stored="true" multiValued="false" />
</schema>
Downloaded the MySQL JDBC driver using the following command:
wget -q "http://search.maven.org/remotecontent?filepath=mysql/mysql-connector-java/5.1.32/mysql-connector-java-5.1.32.jar" -O mysql-connector-java.jar
Add to solrconfig.xml:
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="mysql-connector-java.jar" />
After editing the files above, I uploaded them to SolrCloud using the following ZooKeeper command:
bin/solr zk -n travasko_config -z solr-node-1:9983 cp /opt/solr-8.3.0/server/solr/configsets/_default/conf/managed-schema zk:/configs/travasko_config/managed-schema
I then checked all the above files in SolrCloud and could see the changes I added.
The current issue is that when I select the collection I created above and click on Dataimport, it throws the error below:
The solrconfig.xml file for this index does not have an operational DataImportHandler defined!
Note: The AWS RDS and EC2 instances are in the same VPC sharing the same Security Group.
So why is the solrconfig.xml file throwing an error during dataimport? What am I missing here?
The solution to the above issue was basically setting the following Java system property, required for Solr versions greater than 8.2.0:
-Denable.dih.dataConfigParam=true
This parameter can be set either in solr.in.cmd or solr.in.sh, which can be found inside the directory below:
/opt/solr-8.3.0/bin
if /opt/solr-8.3.0 is the installation directory of Solr.
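In solr.in.sh, for example, that could look like the following (a sketch; append the flag to whatever SOLR_OPTS line your file already has):
# enable the DIH dataConfig parameter, required for Solr versions above 8.2.0
SOLR_OPTS="$SOLR_OPTS -Denable.dih.dataConfigParam=true"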
The other method is to pass it as a command-line parameter while starting Solr, as below:
sudo bin/solr start -c -p 8983 -h solr-node-1 -Denable.dih.dataConfigParam=true -force
solr-node-1 is the hostname mapped to the public IPv4 address of the AWS EC2 instance on which Solr is configured.

Permanent changes to WildFly 10 configuration (standalone.xml)

How can I make these two Wildfly 10 configuration changes permanent?
max-parameters="4000"
<access-log />
If I write them to standalone.xml and restart Wildfly, they disappear.
<subsystem xmlns="urn:jboss:domain:undertow:3.1">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https" max-parameters="4000" />
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<access-log/>
<filter-ref name="server-header"/>
<filter-ref name="x-powered-by-header"/>
</host>
</server>
...
Harri
Shut down the server before you manually edit standalone.xml, or edit the configuration using the command-line console (jboss-cli) if you want to change it on the fly.
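If you go the CLI route, the two changes could look roughly like this in jboss-cli (a sketch based on the undertow addresses visible in the snippet above; reload afterwards so the running configuration and standalone.xml stay in sync):
# raise max-parameters on the default HTTP listener
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-parameters,value=4000)
# enable the access log on the default host
/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add
:reload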

Hadoop 2.3.0 Issue : -ls: For input string: "false"

I'm getting errors running simple hadoop fs commands. I'm on a Mac running OS X 10.10.5, and I've configured Hadoop as a standalone cluster.
$ hadoop fs -ls
2015-09-26 06:59:20,531 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: For input string: "false"
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
$ hadoop fs -ls /
2015-09-26 07:26:16,629 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: For input string: "false"
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
$ hadoop fs -mkdir /user/hadoop
2015-09-26 07:01:05,356 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-mkdir: For input string: "false"
Usage: hadoop fs [generic options] -mkdir [-p] <path> ...
I'm running a standalone hadoop 2.3.0 on OS X 10.10.5.
$ hadoop version
Hadoop 2.3.0
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1567123
Compiled by jenkins on 2014-02-11T13:40Z
Compiled with protoc 2.5.0
From source with checksum dfe46336fbc6a044bc124392ec06b85
This command was run using /Users/davidlaxer/hadoop-2.3.0/share/hadoop/common/hadoop-common-2.3.0.jar
$ java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
I tried to fix the warning (so far without success). I suspect the warning is an unrelated issue:
WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ env | grep HADOOP
HADOOP_HOME=/Users/davidlaxer/hadoop-2.3.0
HADOOP_COMMON_LIB_NATIVE_DIR=/Users/davidlaxer/hadoop-2.3.0/lib/native
HADOOP_CONF_DIR=/Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf
HADOOP_OPTS=-Djava.library.path=/Users/davidlaxer/hadoop-2.3.0/lib
Here are my hadoop config files which are in /Users/davidlaxer/hadoop-2.3.0/etc/hadoop/conf:
$ cat hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>false</value>
</property>
</configuration>
$ cat core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
$ cat mapred_site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
$ cat yarn_site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
$ jps
99664 NameNode
4997 Jps
2202 ZeppelinServer
2283 RemoteInterpreterServer
2158 JupyterScala
99006 SecondaryNameNode
Same issue with hadoop 2.6.1 (downloaded as binary):
$ bin/hadoop version
Hadoop 2.6.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b4d876d837b830405ccdb6af94742f99d49f9c04
Compiled by jenkins on 2015-09-16T21:07Z
Compiled with protoc 2.5.0
From source with checksum ba9a9397365e3ec2f1b3691b52627f
This command was run using /Users/davidlaxer/hadoop-2.6.1/share/hadoop/common/hadoop-common-2.6.1.jar
$ bin/hadoop fs -ls /
2015-09-26 07:44:55,977 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-ls: For input string: "false"
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
The issue was in the config file hdfs-site.xml.
I had the value of dfs.replication set to:
false
I changed it to:
0
<configuration>
<property>
<name>dfs.replication</name>
<value>0</value>
</property>
</configuration>

Ant: i/o-redirection "must not contain the '<' character"

I want to integrate a mysql statement in a build.xml file for Ant.
The command should be:
mysql -u$user -p$pwd -D$database < app/mysql/geo/data/geo.data.sql
For Ant, I defined this macro:
<macrodef name="populateGeoDatabase">
<attribute name="user"/>
<attribute name="password"/>
<attribute name="database"/>
<sequential>
<exec executable="mysql">
<arg line="-u#{user} -p#{password} -D#{database} < app/mysql/geo/data/geo.data.sql" />
</exec>
</sequential>
</macrodef>
At first I tried to use multiple <arg value> lines, one for each parameter and one for the I/O redirection to consume the input SQL file.
Both attempts failed with the same error message:
The value of attribute "line" associated with an element type "arg" must not contain the '<' character.
How can I achieve that "<" redirection with Ant's exec?
UPDATE
As Bhavin Panchani pointed out, I have to escape the "<" with &lt; due to XML-specific markup:
<exec executable="mysql">
<arg line="-u#{user} -p#{password} -D#{database} < app/mysql/geo/data/geo.data.sql" />
</exec>
But this does not solve the problem either; it results in the mysql client simply printing all of its valid options and variables:
populate-dev-geo-database:
[exec] mysql Ver 14.14 Distrib 5.5.43, for debian-linux-gnu (x86_64) using readline 6.2
[exec] Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
[exec]
[exec] Oracle is a registered trademark of Oracle Corporation and/or its
[exec] affiliates. Other names may be trademarks of their respective
[exec] owners.
[exec]
[exec] Usage: mysql [OPTIONS] [database]
[exec] -?, --help Display this help and exit.
[exec] -I, --help Synonym for -?
[exec] --auto-rehash Enable automatic rehashing. One doesn't need to use
[exec] Variables (--variable-name=value)
[exec] and boolean options {FALSE|TRUE} Value (after reading options)
[exec] --------------------------------- ----------------------------------------
[exec] auto-rehash TRUE
[exec] auto-vertical-output FALSE
The only way I could solve this issue was to use the "-e" option to execute an SQL statement and then use SOURCE. This works:
<exec executable="mysql">
<arg line="-u#{user} -p#{password} -D#{database} -e 'source app/mysql/geo/data/geo.data.sql'" />
</exec>
However, I am still interested in a solution for using I/O redirection with Ant in combination with the mysql client.
You need to escape "<" as &lt; according to XML syntax:
<exec executable="mysql">
<arg line="-u#{user} -p#{password} -D#{database} < app/mysql/geo/data/geo.data.sql" />
</exec>
I/O redirection doesn't work that way - it is something your shell does (at least on Unix). Use a <redirector> in Ant or <exec>'s input attribute.
Ant's site has a couple of FAQs dedicated to this, see http://ant.apache.org/faq#shell-redirect-1
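With the input attribute, the macro from the question could be rewritten roughly like this (a sketch keeping the attribute names used above):
<exec executable="mysql" input="app/mysql/geo/data/geo.data.sql">
<!-- input replaces the shell-style "< file" redirection: mysql reads the script from stdin -->
<arg value="-u#{user}"/>
<arg value="-p#{password}"/>
<arg value="-D#{database}"/>
</exec>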