I have 2 pods (1 MySQL + 1 IdM) on a Kubernetes cluster (1 master + 1 worker node on VirtualBox).
Although Keyrock creates the idm database, the migrations cannot be applied.
So the superuser is never inserted into the database and many fields of the tables are missing.
Below are the IdM logs from the corresponding container:
To pin down the concrete problem with your installation, I would need more information about the configuration and versions of MySQL and Keyrock.
To avoid such problems, I would recommend using the Keyrock helm chart:
https://github.com/FIWARE/helm-charts/tree/main/charts/keyrock
It is tested to run out of the box with the Bitnami MySQL chart (https://github.com/bitnami/charts/tree/master/bitnami/mysql).
I need to access a Postgres database from my Java code, which resides in an OpenShift cluster, without manually initiating port forwarding through the oc port-forward command.
I have tried using the OpenShift Java client's connection-factory class to get a connection by passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method to do so depends a bit on your physical infrastructure, because we are by definition integrating with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres speaks a plain TCP protocol. But one of the other options in that chapter (LoadBalancer, External IP, or NodePort) is probably your best choice, depending on your networking infrastructure and needs.
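Once the service is exposed this way, an external client connects to a node's address and the assigned port instead of going through oc port-forward. Below is a minimal sketch, written with Node's pg package purely for illustration (the same host/port/credentials apply from Java via JDBC); the IP, port and credentials are hypothetical placeholders:

    // connect-postgres.js -- minimal sketch; all values are placeholders
    const { Client } = require('pg');

    const client = new Client({
      host: '192.168.99.101', // any cluster node's external IP (hypothetical)
      port: 30432,            // the nodePort assigned to the Postgres service (hypothetical)
      user: 'appuser',
      password: 'secret',
      database: 'appdb',
    });

    async function main() {
      await client.connect(); // no manual port forwarding needed
      const res = await client.query('SELECT version()');
      console.log(res.rows[0]);
      await client.end();
    }

    main().catch(console.error);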
I tested installing a MongoDB shard on Kubernetes with Helm, but I found that those helm charts do not really produce a proper MongoDB sharded cluster. The charts correctly create Pods with names like mongos-1, mongod-server-1 and mongod-shard-1, which looks like a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not actually set up on the corresponding Pods. They just create a plain mongod instance on each pod, and there is no connection between them. Do I need to add scripts that execute commands similar to rs.addShard(config)? I encountered the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it generally inappropriate to run a MySQL/MongoDB cluster on Kubernetes? Should the database be installed independently or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance for testing and development, and a replica set for production-like deployments.
Also to make things easier you can use MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters,
using our management, monitoring and backup platforms, Ops Manager and
Cloud Manager. By installing this integration, you will be able to
deploy MongoDB instances with a single simple command.
This guide references the official MongoDB documentation with the necessary details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So basically, that covers everything you need to know on this topic.
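As a quick sanity check that the shards are actually wired together (rather than being independent mongod instances, as in your Helm experiment), you can ask the mongos router to list its shards. Here is a minimal sketch with the Node.js mongodb driver; the service name mongos-svc and the credentials are hypothetical and depend on how the cluster was deployed:

    // check-shards.js -- minimal sketch; the connection string is a placeholder
    const { MongoClient } = require('mongodb');

    async function main() {
      // Connect to the mongos router service, not to an individual mongod pod.
      const client = new MongoClient('mongodb://admin:secret@mongos-svc:27017/admin');
      await client.connect();

      // listShards only succeeds against a mongos of a properly configured sharded cluster.
      const result = await client.db('admin').command({ listShards: 1 });
      console.log(result.shards); // one entry per shard replica set

      await client.close();
    }

    main().catch(console.error);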
Please let me know if that helped.
I am integrating WireCloud and FIWARE IdM. I installed both through Docker successfully. However, after installing FIWARE IdM, I am not able to log in as admin (username: admin@test.com, password: 1234).
Every time, it redirects to "ip:3000/auth/login". Do I have to make any other configuration in WireCloud or FIWARE IdM?
Also, even after entering wrong credentials, it redirects me to /auth/login and does not display any error message.
My WireCloud, FIWARE IdM and MySQL database are in different containers. Could this be the issue?
The IdM should be deployed as in production to be used by WireCloud. That is, you should configure the IdM service using public domain names, HTTPS, and so on... It seems you are creating a local installation, so you will have to apply some workarounds. Some of those requirements are not enforced by WireCloud, so it should be enough to ensure you use a domain name for accessing the IdM.
You can simulate having the IdM server configured with a public domain by adding the proper entry to /etc/hosts (see this link if you are running Windows); the correct value depends on how you configured the IdM service. The idea is to ensure the domain used for accessing the IdM resolves to the correct IP address both inside the WireCloud container and from your local computer. We can give you more detailed steps if you provide more details about how you are launching the different containers.
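For example, assuming the IdM was configured to be reachable as idm.example.org (a hypothetical name) and the containers run on a host with IP 192.168.56.10, the entry would look like this, both on your computer and inside the WireCloud container:

    # /etc/hosts -- hypothetical values, adjust the domain and IP to your setup
    192.168.56.10   idm.example.org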
I have my Sails application on an AWS instance with all dependencies installed and no apparent issues. However, each time I try to launch the app I am getting the following error.
error: AdapterError: Connection is already registered
I have not managed to successfully lift Sails on the instance yet, and sails-mysql was freshly installed, so no connections should be registered.
I have taken the following steps to deploy my app:
Set up a MySQL RDS instance (EU-West)
Created and set up an Ubuntu (amd64) t2.micro EC2 instance (EU-West)
Installed all prerequisites (Git, NVM, Node.js, Sails, etc.)
Cloned my Sails project
Installed dependencies for Sails
Correctly configured my connection settings for Sails to use my RDS instance.
I know that my connection settings are correct because I have been able to run Sails on my local machine against my RDS instance, and it consistently lifts without any issues.
I am also able to connect to my RDS instance using SequelPro with no problems.
I have had issues with dependencies in the past but have managed to fix those issues and have not had any of them on my local machine or with my EC2 instance.
After searching for a while I have come across a few users who have had similar issues and managed to fix them with Waterline's teardown methods; however, I am unsure how to achieve this.
I have done my best to provide as much information as possible and any help would be massively appreciated.
Sails Version: 0.12.11
Thank you in advance.
I managed to fix the issue by carrying out the following:
Switched my environment to production in config/bootstrap.js, e.g. process.env.NODE_ENV = 'production'
In config/connections.js, added connectTimeout: 20000 to make sure the request does not time out before the connection is made (see the sketch below).
Ensured that the inbound rules of the security group for the RDS instance allow connections from the security group associated with my EC2 instance:
Type: MySQL/Aurora
Protocol: TCP
Port Range: 3306
Source: < Your security group ID >
Following the above points also meant I overcame the issue with handshake timeouts when communicating with the RDS.
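For reference, here is a minimal config/connections.js sketch for Sails 0.12 with the timeout applied; the endpoint, credentials and database name are placeholders for your own RDS values:

    // config/connections.js -- minimal sketch, all values are placeholders
    module.exports.connections = {
      mysqlRds: {
        adapter: 'sails-mysql',
        host: 'mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com', // hypothetical RDS endpoint
        port: 3306,
        user: 'admin',
        password: 'secret',
        database: 'myapp',
        connectTimeout: 20000 // give the RDS handshake enough time to complete
      }
    };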
I've set up our Heroku app with an Amazon RDS instance.
I followed the guide here:
https://devcenter.heroku.com/articles/amazon_rds
This guide basically says to require SSL with the connection and then to input your RDS credentials.
This doesn't seem very secure to me. If someone has my DB URL, user and password, then they can log in from anywhere, correct? The SSL is nice to prevent sniffing of this info, but I'd like to lock it down further, to a machine, IP address or SSH.
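For concreteness, "requiring SSL" here just means the database connection itself is made over TLS; with a Node MySQL client, for example, it is a single ssl option (a hypothetical sketch, not taken from the Heroku guide):

    // hypothetical sketch of an SSL-encrypted RDS connection (Node mysql driver)
    var mysql = require('mysql');
    var connection = mysql.createConnection({
      host: 'mydb.xxxxxxxx.us-east-1.rds.amazonaws.com', // placeholder endpoint
      user: 'myuser',
      password: 'mypassword',
      database: 'mydb',
      ssl: 'Amazon RDS' // built-in CA profile: traffic is encrypted, but anyone
                        // holding these credentials can still connect from anywhere
    });
    connection.connect();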
I previously set up RDS DB instances where access was locked down to only specific IPs, but Heroku no longer recommends this, for whatever reason.
So the questions are:
Are my assumptions correct here?
How can I lock this down further?
Why doesn't Heroku recommend locking it down to an IP (or at least an IP range)?
I'll run this by Heroku support as well and post an update, but wanted to get thoughts from the community.
Previously, Heroku recommended locking down access by referencing the Heroku AWS account ID. That approach is no longer recommended. The Heroku changelog entry lists the reasons, reproducing here for completeness:
Cross-security grants don't work with AWS VPC (which is now the default on AWS)
It's not safe because it grants access to all apps running on Heroku, not just yours
Doesn't work across AWS regions
Heroku may in the future run apps in a VPC or in a different region or use a different AWS account
We know that not all customers are happy with this level of access granularity, and we're continuously evaluating whether this is the optimal setup.