My goal is to add geospatial query support to my JHipster-generated Spring Boot + MySQL project, but I have failed to properly configure the H2 database used by my tests and by local dev deployments of the app. Since we have a strict CI/CD pipeline, this means I have not been able to test in prod yet, but I suspect I'd run into the same error there too. The error I get when performing a spatial query in a test or dev environment: org.h2.jdbc.JdbcSQLSyntaxErrorException: Function "WITHIN" not found;.
There are a number of posts and guides addressing this issue, but they have not resolved the problem for me. I have followed the tutorial here, the helpful documentation here, and have tried the solutions/suggestions in post 1, post 2, post 3, post 4, and several others. I also compared my code to this example project. But I am still unable to get past this error.
Relevant config...
pom.xml:
...
<java.version>1.8</java.version>
<spring-boot.version>2.1.6.RELEASE</spring-boot.version>
<spring.version>5.1.8.RELEASE</spring.version>
<hibernate.version>5.3.10.Final</hibernate.version>
<h2.version>1.4.199</h2.version>
<jts.version>1.13</jts.version>
...
<repositories>
    <repository>
        <id>OSGEO GeoTools repo</id>
        <url>http://download.osgeo.org/webdav/geotools</url>
    </repository>
    <repository>
        <id>Hibernate Spatial repo</id>
        <url>http://www.hibernatespatial.org/repository</url>
    </repository>
</repositories>
...
<dependencies>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-spatial</artifactId>
    </dependency>
    <dependency>
        <groupId>com.vividsolutions</groupId>
        <artifactId>jts</artifactId>
        <version>${jts.version}</version>
    </dependency>
</dependencies>
My main application.yml:
spring:
  jpa:
    open-in-view: false
    properties:
      hibernate.jdbc.time_zone: UTC
    hibernate:
      dialect: org.hibernate.spatial.dialect.mysql.MySQL56SpatialDialect
      ddl-auto: none
My application-dev.yml for my dev environment:
spring:
  h2:
    console:
      enabled: false
  jpa:
    database-platform: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
    database: H2
    show-sql: true
    hibernate:
      dialect: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
My application-prod.yml for prod:
spring:
  jpa:
    database-platform: org.hibernate.spatial.dialect.mysql.MySQL56SpatialDialect
    database: MYSQL
    show-sql: false
My test/application.yml:
spring:
  jpa:
    database-platform: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
    database: H2
    open-in-view: false
    show-sql: false
    hibernate:
      dialect: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
      ddl-auto: none
Relevant code in service layer:
@Override
@Transactional(readOnly = true)
public Page<MyObject> findAllWithinDistanceOfLocation(Float distance, Point location, Pageable pageable) {
    log.debug("Request to get all MyObject within a distance centered on location");
    GeometricShapeFactory shapeFactory = new GeometricShapeFactory();
    shapeFactory.setNumPoints(32); // number of points used to define the circle; default is 100, and higher means a more accurately drawn circle
    shapeFactory.setCentre(location.getCoordinate());
    shapeFactory.setSize(distance * 2); // setSize takes the full extent, i.e. the diameter
    Geometry areaOfInterest = shapeFactory.createCircle();
    return myObjectRepository.findAllWithinCircle(areaOfInterest, pageable);
}
Relevant code in repository:
@Query("select e from MyObjectTable e where within(e.location, :areaOfInterest) = true")
Page<MyObject> findAllWithinCircle(@Param("areaOfInterest") Geometry areaOfInterest, Pageable pageable);
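(For context, the query assumes the entity's location field is mapped as a JTS geometry, along these lines - a sketch rather than my exact entity:)
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "my_object")
public class MyObject implements Serializable {

    // JTS Point, persisted through the spatial dialect configured above
    @Column(name = "location", columnDefinition = "POINT")
    private com.vividsolutions.jts.geom.Point location;

    // other fields plus getters/setters elided
}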
Relevant code in database config bean:
/**
 * Open the TCP port for the H2 database, so it is available remotely.
 *
 * @return the H2 database TCP server.
 * @throws SQLException if the server failed to start.
 */
@Bean(initMethod = "start", destroyMethod = "stop")
@Profile(JHipsterConstants.SPRING_PROFILE_DEVELOPMENT)
public Object h2TCPServer() throws SQLException {
    String port = getValidPortForH2();
    log.debug("H2 database is available on port {}", port);
    return H2ConfigurationHelper.createServer(port);
}
private String getValidPortForH2() {
    int port = Integer.parseInt(env.getProperty("server.port"));
    if (port < 10000) {
        port = 10000 + port;
    } else if (port < 63536) {
        port = port + 2000;
    } else {
        port = port - 2000;
    }
    return String.valueOf(port);
}
I've tried different values for the properties above, in as principled a way as I could based on the documentation and other projects, but I can't seem to get this working properly. I suspect I am missing an H2 initialization command that creates an alias for WITHIN, but I still have not been able to grok it and get this working.
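For what it's worth, the kind of initialization I believe is missing looks roughly like the sketch below, assuming the geodb library (which backs GeoDBDialect) is on the classpath; the class name and profile here are illustrative:
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.PostConstruct;
import javax.sql.DataSource;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("dev") // an equivalent would be needed in the test context
public class GeoDbInitializer {

    private final DataSource dataSource;

    public GeoDbInitializer(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @PostConstruct
    public void initSpatialFunctions() throws SQLException {
        try (Connection connection = dataSource.getConnection()) {
            // Registers GeoDB's spatial functions (ST_Within etc.) as H2 aliases
            geodb.GeoDB.InitGeoDB(connection);
        }
    }
}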
Note: I've tried both including and excluding the <repositories> section of the pom above, with no effect.
I went down this path for spatial PostgreSQL and it was painful: CI did not catch bugs until we decided to give up on H2.
I would recommend using the same database in dev and prod via Docker and Testcontainers. JHipster supports this, but it's easy to do by yourself too.
For those who want to know how we resolved this...
The problem: we had a Heroku CI/CD pipeline that did not support Testcontainers, as stated here: https://devcenter.heroku.com/articles/heroku-ci#docker-deploys
To quote the documentation: "Currently, it is not possible to use Heroku CI to test container builds."
Compounding this was that H2's support for spatial queries was too problematic: it gave different results than a native MySQL database and posed the myriad dialect-related problems outlined in the original post.
The not-ideal but workable solution was a combination of a development-process workaround and some standard testing practices.
First, we created a Testcontainers profile so that the geospatial integration tests ran when ./mvnw verify was executed with that profile. The Heroku CI/CD pipeline did not run these tests, but we made running them locally part of our "definition of done".
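A sketch of what such a profile's test configuration can look like, using Testcontainers' JDBC URL support (the database name and image tag are illustrative):
# src/test/resources/config/application-testcontainers.yml
spring:
  datasource:
    url: jdbc:tc:mysql:5.7:///geotest
    driver-class-name: org.testcontainers.jdbc.ContainerDatabaseDriver
  jpa:
    database-platform: org.hibernate.spatial.dialect.mysql.MySQL56SpatialDialect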
To make this less bad and error-prone, we applied the typical unit-testing strategy: mock the repositories that use geospatial queries and exercise the business logic in unit tests. These ran in the CI/CD pipeline.
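A minimal sketch of that strategy (names are illustrative and assume the service takes the repository as a constructor argument):
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Collections;
import org.junit.Test;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageImpl;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;

public class MyObjectServiceUnitTest {

    @Test
    public void returnsObjectsReportedByTheSpatialQuery() {
        // The spatial repository is mocked, so no spatial database is needed in CI
        MyObjectRepository repository = mock(MyObjectRepository.class);
        when(repository.findAllWithinCircle(any(Geometry.class), any(Pageable.class)))
            .thenReturn(new PageImpl<>(Collections.singletonList(new MyObject())));

        MyObjectService service = new MyObjectService(repository);
        Point center = new GeometryFactory().createPoint(new Coordinate(0, 0));

        Page<MyObject> result =
            service.findAllWithinDistanceOfLocation(5.0f, center, PageRequest.of(0, 20));

        assertThat(result.getTotalElements()).isEqualTo(1);
    }
}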
The next step will be to migrate the CI/CD pipeline to one that supports containers. In the meantime, the above approach gave us enough overlapping coverage to feel confident promoting the geospatial-based features to prod. After several months of being stress-tested with feature enhancements and extensions, things seem to have worked well from a product point of view.
Related
I'm using the Thorntail MicroProfile framework to monitor a simple REST service application. The application deploys to OpenShift fine, but the health monitor does not work, because I receive this message:
Readiness probe failed: Get http://10.116.0.57:8080/health/live: dial tcp 10.116.0.57:8080: connect: connection refused
But I can access the health service using the service route URL, e.g. http://thorntail-myproject.apps-crc.testing/health/live, and get the results:
{"status":"UP","checks":[{"name":"server-state","status":"UP"}]}
Both the Liveness and Readiness annotations are included in the HealthCheck implementation class. I also get a response from the service when I execute curl through the pod's remote container shell.
These are the dependencies I'm using in pom.xml:
<dependencies>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>jaxrs</artifactId>
    </dependency>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>microprofile-health</artifactId>
    </dependency>
</dependencies>
Any ideas?
The problem could be caused by many things, but here are some things you can try:
Verify that the Service object for your deployment / deploymentConfig is connecting to the correct Pods and to the correct Ports.
Verify that the Route/Ingress objects are connecting to the correct Service object.
The two things above seem to be in order since you can access the Route URL, but we don't know your deployment setup or how many deployments you have.
Verify that your Liveness and Readiness probes are hitting the correct ports, path (there might be a typo somewhere), and protocol - are you using HTTP or HTTPS? (See the illustrative probe definition after this list.)
If all of the above is correct, check if you have additional NetworkPolicies for your namespace.
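For reference, a probe definition on the deployment looks roughly like this - a sketch in which the port, paths, and delays are assumptions that must match what your container actually serves:
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 10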
I've spent the last week or so learning Docker and all the things it can do; however, one thing I'm struggling to get my head around is the best practice for managing secrets, especially database connection strings, and how these should be stored.
I have a plan in my head where I want a Docker setup containing an ASP.NET Core website, a MySQL database and a phpMyAdmin frontend, deployed onto a droplet I have at DigitalOcean.
I've been playing around a bit and have a docker-compose.yml file that has the MySQL DB and phpMyAdmin correctly linked together:
version: "3"
services:
db:
image: mysql:latest
container_name: mysqlDatabase
environment:
- MYSQL_ROOT_PASSWORD=0001
- MYSQL_DATABASE=atestdb
restart: always
volumes:
- /var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: db-mgr
ports:
- "3001:80"
environment:
- PMA_HOST=db
restart: always
depends_on:
- db
This correctly creates a MySQL DB for me, and I can connect to it from the running phpMyAdmin frontend using root / 0001 as the username/password combo.
I know I now need to add my ASP.NET Core web app to this, but I'm still stumped by the best way to handle my DB password.
I have looked at Docker swarm/secrets, but I still don't fully understand how that works, especially if I want to check my docker-compose file into Git/SCM. Other things I have read suggest using environment variables, but I don't see how that is any different from just checking the connection string into my appsettings.json file, or, for that matter, how it would work in a full CI/CD build pipeline.
This question helped me out a little in getting to this point, but they still have their DB password in their docker-compose file.
It might be that I'm overthinking this.
Any help, guidance or suggestions would be gratefully received.
If you are using Docker Swarm, you can take advantage of the secrets feature and store all your sensitive information, like passwords or even the whole connection string, as a Docker secret.
For each secret that is created, Docker mounts a file inside the container; by default it mounts all the secrets in the /run/secrets folder.
You can create a custom configuration provider to read the secrets and map them as configuration values:
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Extensions.Configuration;

// Note: SwarmSecretsPath (with Path, Optional and KeyDelimiter members) and the
// AddSwarmSecrets() extension used further down are defined in the full source
// linked at the end of this answer.
public class SwarmSecretsConfigurationProvider : ConfigurationProvider
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationProvider(
        IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public override void Load()
    {
        var data = new Dictionary<string, string>
            (StringComparer.OrdinalIgnoreCase);
        foreach (var secretsPath in _secretsPaths)
        {
            if (!Directory.Exists(secretsPath.Path) && !secretsPath.Optional)
            {
                throw new FileNotFoundException(secretsPath.Path);
            }
            foreach (var filePath in Directory.GetFiles(secretsPath.Path))
            {
                // Each mounted secret file becomes one configuration entry:
                // the file name is the key, the file content is the value.
                var configurationKey = Path.GetFileName(filePath);
                if (secretsPath.KeyDelimiter != ":")
                {
                    configurationKey = configurationKey
                        .Replace(secretsPath.KeyDelimiter, ":");
                }
                var configurationValue = File.ReadAllText(filePath);
                data.Add(configurationKey, configurationValue);
            }
        }
        Data = data;
    }
}
Then you must add the custom provider to the application configuration:
public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddSwarmSecrets();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}
Then, if you create a secret named "my_connection_secret" (note the -n flag, so a trailing newline doesn't end up in the connection string):
$ echo "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;" \
| docker secret create my_connection_secret -
and map it to your service as ConnectionStrings:DatabaseConnection:
services:
  app:
    secrets:
      - target: ConnectionStrings:DatabaseConnection
        source: my_connection_secret
it will be the same as writing it into appsettings.json:
{
  "ConnectionStrings": {
    "DatabaseConnection": "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;"
  }
}
If you don't want to store the whole connection string as a secret, you can use a placeholder for the password:
Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd={{pwd}};
and use another custom configuration provider to replace it with the password stored as a secret.
On my blog post How to manage passwords in ASP.NET Core configuration files I explain in detail how to create a custom configuration provider that lets you keep only the password as a secret and update the connection string at runtime. The full source code for the article is hosted at github.com/gabihodoroaga/blog-app-secrets.
Secrets are complicated. I will say that pulling them out into environment variables kicks the problem down the road a bit, especially when you are only using docker-compose (and not something fancier like Kubernetes or Swarm). Your docker-compose.yaml file would look something like this:
environment:
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
Compose will pull MYSQL_ROOT_PASSWORD from an .env file or a command line/environment variable when you spin up your services. Most CI/CD services provide ways (either through a GUI or through some command line interface) of encrypting secrets that get mapped to environment variables on the CI server.
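For example, with an .env file next to the compose file (the value here is a placeholder; keep the file out of SCM):
# .env - add this file to .gitignore
MYSQL_ROOT_PASSWORD=change-me

$ docker-compose up -d   # compose substitutes ${MYSQL_ROOT_PASSWORD} at spin-up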
Not to say that environment variables are necessarily the best way of handling secrets. But if you do move to an orchestration platform, like kubernetes, there will be a straightforward path to mapping kubernetes secrets to those same environment variables.
I want to read data from MariaDB on Google Compute Engine and write it to BigQuery with Dataflow, but I always get the exception below when I run the Dataflow program on DataflowRunner.
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Could not connect to address=(host=xxx.xxx.xxx.xxx)(port=3306)(type=master) : connect timed out)
I can access successfully the MariaDB by DBeaver.
I can run successfully the DataFlow program on DirectRunner.
Can you give me some ideas? Thanks.
To restrict access so that only Dataflow jobs can reach the database, you can leverage the fact that Dataflow's harness VMs are created with the dataflow tag. Otherwise, you can place the GCE instance and the Dataflow workers on a specific network/subnetwork.
For example, create a GCE instance with a network tag such as mariadb so it can be used as the target for firewall rules, and/or select a specific VPC network/subnetwork. Install MariaDB on it (other options are an initialization script or a preinstalled solution through Cloud Launcher).
For the firewall rules, the database needs to be reachable on port tcp:3306. For the GCE instance (target tag mariadb), allow ingress traffic on that port from either the source tag dataflow or from within the subnetwork. Take into account that, for the latter option, you'll also need to allow internal communication between the Dataflow workers inside the subnetwork.
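As an illustration, a rule along these lines (the rule and network names are placeholders) allows tagged Dataflow workers to reach the database:
gcloud compute firewall-rules create allow-dataflow-to-mariadb \
    --network=dataflow-network \
    --allow=tcp:3306 \
    --source-tags=dataflow \
    --target-tags=mariadb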
Now, on the Dataflow side, add the JdbcIO and MariaDB connector dependencies to the pom.xml file:
<!-- https://mvnrepository.com/artifact/org.apache.beam/beam-sdks-java-io-jdbc -->
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-jdbc</artifactId>
    <version>2.3.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.mariadb.jdbc/mariadb-java-client -->
<dependency>
    <groupId>org.mariadb.jdbc</groupId>
    <artifactId>mariadb-java-client</artifactId>
    <version>1.1.7</version>
</dependency>
A sample Dataflow snippet to connect (use internal IP in the JDBC connection string if using the subnetwork approach):
public class MariaDB {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(PipelineOptions.class);
        Pipeline p = Pipeline.create(options);
        PCollection<String> account = p.apply(JdbcIO.<String>read()
            .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                "org.mariadb.jdbc.Driver", "jdbc:mariadb://INTERNAL_IP:3306/database")
                .withUsername("root")
                .withPassword("pwd"))
            .withQuery("SELECT … FROM table")
            .withCoder(SerializableCoder.of(String.class))
            .withRowMapper(new JdbcIO.RowMapper<String>() {
                public String mapRow(ResultSet rs) throws Exception {
                    ...
                }
            }));
        p.run();
    }
}
And launch the job specifying the subnetwork and matching zone if needed:
mvn compile exec:java \
    -Dexec.mainClass=com.example.MariaDB \
    -Dexec.args="--project=PROJECT_ID \
    --stagingLocation=gs://BUCKET_NAME/mariadb/staging/ \
    --output=gs://BUCKET_NAME/mariadb/output \
    --network=dataflow-network \
    --subnetwork=regions/europe-west1/subnetworks/subnet-europe-west \
    --zone=europe-west1 \
    --runner=DataflowRunner"
How do I set up environment variables for Symfony?
For example, when I run my project, it should detect the environment and act accordingly:
http://production.com -> prod environment
http://localhost:9200 -> dev environment --- for Elasticsearch
http://localhost:8000 -> dev environment --- for Doctrine/MySQL
So if I make a MySQL request on localhost, it should go to
http://localhost:8000
and if I make a request to Elasticsearch, it should go to
http://localhost:9200
and if the project runs in the production environment, it should make the requests to
http://production.com:9200 --- Elasticsearch
http://production.com:8000 --- Doctrine/MySQL
I think this can be done in parameters.yml, but I didn't really get how. Can someone help me solve this problem?
Thanks a lot in advance.
I'm not exactly sure what the problem is here, so I'll give you a more general answer.
Symfony has a really great way to configure your project for different situations (or environments). You should have a look at the official documentation which explains things in depth.
By default, Symfony comes with 3 configurations for different environments:
app/config/config_dev.yml for development
app/config/config_prod.yml for production
app/config/config_test.yml for (unit) testing
Each of these config files can override settings from the base configuration file which is app/config/config.yml. You would store your general/common settings there. Whenever you need to override something for a specific environment, you just go to the environment config and change it.
Let's say you have the following base configuration in app/config/config.yml:
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: "%prod_database_host%"
        port: "%prod_database_port%"
        dbname: "%prod_database_name%"
        user: "%prod_database_user%"
        password: "%prod_database_password%"
        charset: UTF8
Now let's say you have a different database for each environment - prod, dev and test. The way to do this is to override the configuration in the environment's configuration file (let's say app/config/config_dev.yml):
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: "%dev_database_host%"
        port: "%dev_database_port%"
        dbname: "%dev_database_name%"
        user: "%dev_database_user%"
        password: "%dev_database_password%"
        charset: UTF8
Add the necessary %dev_*% parameters to your app/config/parameters.yml.dist and app/config/parameters.yml. Now, whenever you open your application in the dev environment, it will connect to the database specified by your %dev_database_...% parameters.
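For example (the values here are placeholders):
# app/config/parameters.yml
parameters:
    dev_database_host: 127.0.0.1
    dev_database_port: 3306
    dev_database_name: myapp_dev
    dev_database_user: root
    dev_database_password: secret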
This is pretty much it. You can do the same for any configuration that needs to change in a specific environment. You should definitely have a look at the documentation; it's explained straightforwardly, with examples.
I have installed Sonar 3.5.1 on a brand new PostgreSQL 9.2 database. The server seems to run fine, but sonar-runner (v2.2) fails with the following error:
Caused by: org.sonar.core.persistence.BadDatabaseVersion: The current batch process and the configured remote server do not share the same DB configuration.
- Batch side: jdbc:postgresql://10.1.0.210/sonar (postgres / *****)
- Server side: check the configuration at http://sonar.kopitoto/system
I am pretty confident that there is no other concurrent installation of Sonar pointing to the same database, because:
This is the first Sonar installation in this organization, ever
The value of sonar.core.id in the DB matches the value returned by the Sonar server:
Getting the value from the DB:
sonar=# SELECT text_value FROM properties WHERE prop_key = 'sonar.core.id';
text_value
----------------
20130525192736
(1 row)
Getting the value from the server:
$ curl http://sonar.kopitoto/api/server
<?xml version="1.0" encoding="UTF-8"?>
<server>
<id>20130525192736</id>
<version>3.5.1</version>
<status>UP</status>
</server>
Sonar-runner's properties:
sonar.host.url: http://sonar.kopitoto
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.password: *****
sonar.jdbc.schema: public
sonar.jdbc.url: jdbc:postgresql://10.1.0.210/sonar
sonar.jdbc.username: postgres
Of course, the password is not really five stars, but I checked it twice. If I change it a little bit, the runner fails earlier with an authentication error, so a password mismatch is ruled out.
Server's sonar.properties:
sonar.jdbc.username: postgres
sonar.jdbc.password: *****
sonar.jdbc.url: jdbc:postgresql://10.1.0.210/sonar
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.schema: public
Again, the password above is not really five stars, but I am pretty sure it is correct. The server logs say nothing about errors, and they show the database schema being initialized when I stop the server, drop the database, create an empty one, and then start the Sonar server again.
Am I missing something?
At this point, I am thinking that this is a bug in Sonar (probably in sonar-runner). Unfortunately, Sonar's issue-tracking system is littered with such reports, all closed with a "Not a bug" resolution. I guess I would be dismissed similarly if I reopened one of those issues.
So I hope I am really missing something here.