Metabase not connecting to MySQL (docker containers) - mysql

I'm running Metabase and MySQL in separate Docker containers, both connected to the same bridge network.
I can ping the MySQL container from the Metabase container. However, when I try to connect to MySQL from the Metabase interface, I get the following error:
"unexpected end of stream, read 0 bytes from 4 (socket was closed by server)".
Here is my config:
Windows 10
Metabase version v0.36.4 (196c1f6 release-0.36.x)
8.0.21 MySQL Community Server - GPL
And my docker network configuration:
[
{
"Name": "mysql-metabase-net",
"Id": "bbe21c1873049a3ce0aee6f2e8b2cd3ba5c443cc655d685368f342b42e9d6e98",
"Created": "2020-09-07T05:18:19.355990708Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7c1dfaee4a8783aae6afccbbc1970d3fb971645a972c2484e67125b7aba027bc": {
"Name": "my-container",
"EndpointID": "7ee6b1f4b850f2a9389fa6cda311eb19e28016890eafd7659255d2fab9b7a38b",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"cc261fe878298ec8199a700351195901a65f3575d395971a6f5268e9a0b9d93f": {
"Name": "metabase",
"EndpointID": "0c3541adc8d42c57a14c7d3f465111c2734718734432df5a61ee676628ef79b2",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Am I doing something wrong? Any idea what the issue could be?
Thanks for your help.
Jeremy

So it turned out I had my ports messed up. Further troubleshooting involving the public key and SSL was solved here.
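For anyone who lands here with the same error, a minimal sketch of the kind of setup that works once the ports are right (the container names, password, and database below are examples, not my exact values): on a shared bridge network, Metabase should connect to the MySQL container by name on the container port 3306, not on a host-mapped port.
docker network create mysql-metabase-net
docker run -d --name my-container --network mysql-metabase-net \
  -e MYSQL_ROOT_PASSWORD=example_pw -e MYSQL_DATABASE=example_db \
  mysql:8.0.21
docker run -d --name metabase --network mysql-metabase-net \
  -p 3000:3000 metabase/metabase:v0.36.4
# In Metabase's "Add database" form: Host = my-container, Port = 3306,
# plus the database name and credentials set above.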

Related

adding nested OPC-UA Variable results in "String cannot be coerced to a nodeId"

Error: String cannot be coerced to a nodeId
Hi,
I was busy setting up a connection between the Orion Broker and a PLC with an OPC-UA server, using the opcua iotagent.
I managed to set up all the parts and I am able to receive (test) data, but I am unable to follow the tutorial with regard to adding an entity to the Orion Broker using a JSON file:
curl http://localhost:4001/iot/devices -H "fiware-service: plcservice" -H "fiware-servicepath: /demo" -H "Content-Type: application/json" -d @add_device.json
The expected result would be an entity added to the Orion Broker with the supplied data, but this only results in an error message:
{"name":"Error","message":"String cannot be coerced to a nodeId : ns*4:s*MAIN.mainVar"}
Suspected error
Is it possible that the iotagent does not work nicely with nested variables?
Steps taken
double-checked the availability of the OPC data:
the OPC data changes every second and can be seen in the Broker log
reduced the complexity of the setup to only include the Broker and the IoT agent
Additional information:
add_device.json file:
{
"devices": [
{
"device_id": "plc1",
"entity_name": "PLC1",
"entity_type": "plc",
"attributes": [
{
"object_id": "ns*4:s*MAIN.mainVar",
"name": "main",
"type": "Number"
}
],
"lazy": [
],
"commands" : []
}
]
}
Config of the IoT agent (from localhost:4081/config):
{
"config": {
"logLevel": "DEBUG",
"contextBroker": {
"host": "orion",
"port": 1026
},
"server": {
"port": 4001,
"baseRoot": "/"
},
"deviceRegistry": {
"type": "memory"
},
"mongodb": {
"host": "iotmongo",
"port": "27017",
"db": "iotagent",
"retries": 5,
"retryTime": 5
},
"types": {
"plc": {
"service": "plcservice",
"subservice": "/demo",
"active": [
{
"name": "main",
"type": "Int16"
},
{
"name": "test1",
"type": "Int16"
},
{
"name": "test2",
"type": "Int16"
}
],
"lazy": [],
"commands": []
}
},
"browseServerOptions": null,
"service": "plc",
"subservice": "/demo",
"providerUrl": "http://iotage:4001",
"pollingExpiration": "200000",
"pollingDaemonFrequency": "20000",
"deviceRegistrationDuration": "P1M",
"defaultType": null,
"contexts": [
{
"id": "plc_1",
"type": "plc",
"service": "plcservice",
"subservice": "/demo",
"polling": false,
"mappings": [
{
"ocb_id": "test1",
"opcua_id": "ns=4;s=test.TestVar.test1",
"object_id": null,
"inputArguments": []
},
{
"ocb_id": "test2",
"opcua_id": "ns=4;s=test.TestVar.test2",
"object_id": null,
"inputArguments": []
},
{
"ocb_id": "main",
"opcua_id": "ns=4;s=MAIN.mainVar",
"object_id": null,
"inputArguments": []
}
]
}
]
}
}
I'm one of the maintainers of the iotagent-opcua repo. We have identified and fixed the bug you were hitting; please update your agent to the latest version (1.4.0).
In case you haven't heard about it, starting from 1.3.8 we introduced a new configuration property called "relaxTemplateValidation", which lets you use previously forbidden characters (e.g. = and ;). I suggest you have a look at it in the configuration examples provided.
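For reference, a minimal sketch (my assumption of how the registration could look once the agent is updated and relaxTemplateValidation is enabled in its configuration) that carries the literal OPC-UA node id in object_id:
cat > add_device.json <<'EOF'
{
  "devices": [
    {
      "device_id": "plc1",
      "entity_name": "PLC1",
      "entity_type": "plc",
      "attributes": [
        { "object_id": "ns=4;s=MAIN.mainVar", "name": "main", "type": "Number" }
      ],
      "lazy": [],
      "commands": []
    }
  ]
}
EOF
curl http://localhost:4001/iot/devices \
  -H "fiware-service: plcservice" \
  -H "fiware-servicepath: /demo" \
  -H "Content-Type: application/json" \
  -d @add_device.json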

How to fix failed VMSS deployment with error "unknown network allocation error"

I am trying to deploy a 3-tier architecture to Azure using the Azure PowerShell CLI and a customized ARM template with parameters. I am not having any issues with the PowerShell script or the template's validity.
Within the template, among other things, are two Virtual Machine Scale Sets, one for the front end and one for the back end. The front end is Windows and the back end is Red Hat. The front end sits behind an application gateway while the back end sits behind a load balancer. What's weird is that the front-end VMSS deploys with no problem and all is well. The back-end VMSS fails every time I try to deploy it, with a vague "Unknown network allocation error" message that I have no idea how to debug (since it provides no specifics, unlike all of my other error messages so far).
I based the ARM template on an exported template from a working model of this architecture in another resource group, modified the parameters, and have spent a while cleaning up issues and errors in Azure's exported template. I have tried deleting everything and starting from scratch, but that doesn't fix the problem. I thought it was possible I had reached the limit of free-subscription processors, so I tried making the front-end VMSS dependent on the back-end VMSS so the back-end VMSS would be created first, but the same issue still happened.
Here is the back-end VMSS part of the template:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"apiVersion": "2018-10-01",
"name": "[parameters('virtualMachineScaleSets_JakeAppBESS_name')]",
"location": "westus2",
"dependsOn": [
"[parameters('loadBalancers_JakeAppBESSlb_name')]"
],
"sku": {
"name": "Standard_B1ls",
"tier": "Standard",
"capacity": 1
},
"properties": {
"singlePlacementGroup": true,
"upgradePolicy": {
"mode": "Manual"
},
"virtualMachineProfile": {
"osProfile": {
"computerNamePrefix": "jakeappbe",
"adminUsername": "Jake",
"adminPassword": "[parameters('JakeApp_Password')]",
"linuxConfiguration": {
"disablePasswordAuthentication": false,
"provisionVMAgent": true
},
"secrets": []
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS"
}
},
"imageReference": {
"publisher": "RedHat",
"offer": "RHEL",
"sku": "7.4",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppBESS_name'), 'Nic')]",
"properties": {
"primary": true,
"enableAcceleratedNetworking": false,
"dnsSettings": {
"dnsServers": []
},
"enableIPForwarding": false,
"ipConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppBESS_name'), 'IpConfig')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworks_JakeAppVnet_name'), '/subnets/BEsubnet')]"
},
"privateIPAddressVersion": "IPv4",
"loadBalancerBackendAddressPools": [
{
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/loadBalancers/', parameters('loadBalancers_JakeAppBESSlb_name'), '/backendAddressPools/bepool')]"
}
],
"loadBalancerInboundNatPools": [
{
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/loadBalancers/', parameters('loadBalancers_JakeAppBESSlb_name'), '/inboundNatPools/natpool')]"
}
]
}
}
]
}
}
]
},
"priority": "Regular"
},
"overprovision": true
}
},
For reference, here's the front-end VMSS's part of the template so you can compare and see that there aren't many differences:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"apiVersion": "2018-10-01",
"name": "[parameters('virtualMachineScaleSets_JakeAppFESS_name')]",
"location": "westus2",
"dependsOn": [
"[parameters('applicationGateways_JakeAppFE_AG_name')]",
],
"sku": {
"name": "Standard_B1ls",
"tier": "Standard",
"capacity": 1
},
"properties": {
"singlePlacementGroup": true,
"upgradePolicy": {
"mode": "Manual"
},
"virtualMachineProfile": {
"osProfile": {
"computerNamePrefix": "jakeappfe",
"adminUsername": "Jake",
"adminPassword": "[parameters('JakeApp_Password')]",
"windowsConfiguration": {
"provisionVMAgent": true,
"enableAutomaticUpdates": true
},
"secrets": []
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS"
}
},
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppFESS_name'), 'Nic')]",
"properties": {
"primary": true,
"enableAcceleratedNetworking": false,
"dnsSettings": {
"dnsServers": []
},
"enableIPForwarding": false,
"ipConfigurations": [
{
"name": "[concat(parameters('virtualMachineScaleSets_JakeAppFESS_name'), 'IpConfig')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworks_JakeAppVnet_name'), '/subnets/FEsubnet')]"
},
"privateIPAddressVersion": "IPv4",
"applicationGatewayBackendAddressPools": [
{
"id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/applicationGateways/', parameters('applicationGateways_JakeAppFE_AG_name'), '/backendAddressPools/appGatewayBackendPool')]"
}
]
}
}
]
}
}
]
},
"priority": "Regular"
},
"overprovision": true
}
},
I expected them both to behave similarly. Granted, the back end is RHEL while the front end is Windows, and the front end is behind an application gateway while the back end is behind a load balancer, but this setup works perfectly fine in my other resource group, which was deployed through the portal instead of through ARM. But every time I try to deploy this, I get this error:
New-AzureRmResourceGroupDeployment : 1:30:56 AM - Resource Microsoft.Compute/virtualMachineScaleSets 'ProdBESS' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "NetworkingInternalOperationError",
"message": "Unknown network allocation error."
}
]
}
}'
Okay, I finally figured out what the issue was, so for anyone searching who finds this thread in the future with the same error:
Apparently the part of the template dealing with the load balancer for the VMSS (which was exported from the Azure portal) had two conflicting inbound NAT pools (overlapping port ranges). Once I deleted the part of the template creating the conflicting extra NAT pool, my VMSS deployed properly without issue.
I have no idea why the Azure portal exported a template with an extra NAT pool that had never existed (there was only one on the original load balancer I exported the template from).
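In case it helps anyone debugging the same error: one quick way to spot the overlap (assuming you have the Azure CLI available; the resource group name below is a placeholder) is to list the inbound NAT pools on the back-end load balancer and compare their frontend port ranges.
az network lb inbound-nat-pool list \
  --resource-group MyResourceGroup \
  --lb-name JakeAppBESSlb \
  --query "[].{name:name, start:frontendPortRangeStart, end:frontendPortRangeEnd}" \
  --output table
# Any two pools whose start/end ranges overlap will have to be merged or removed.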

How to configure marathon-lb when the container runs in HOST network mode?

My JSON for Marathon is below:
{
"id": "/storage/mysql",
"cmd": null,
"cpus": 1,
"mem": 512,
"disk": 0,
"instances": 1,
"constraints": [
[
"hostname",
"UNIQUE"
]
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "reg.xxxxx.cn/library/mysql:5.7",
"network": "HOST",
"portMappings": [],
"privileged": true,
"parameters": [],
"forcePullImage": false
}
},
"env": {
"MYSQL_ROOT_PASSWORD": "123456"
},
"labels": {
"HAPROXY_GROUP": "internal"
},
"portDefinitions": [
{
"port": 3306,
"protocol": "tcp",
"labels": {}
}
]
}
I find that the HAProxy config (running at 192.168.30.142) is:
frontend storage_mysql_3306
bind *:3306
mode tcp
use_backend storage_mysql_3306
backend storage_mysql_3306
balance roundrobin
mode tcp
server 192_168_30_144_31695 192.168.30.144:31695
The MySQL container is running at 192.168.30.144, so what I want is:
server 192_168_30_144_3306 192.168.30.144:3306
So what should I do to solve this?
Thanks!
I think I found the answer:
http://mesosphere.github.io/marathon/docs/host-port.html
Mesos agents do NOT offer all ports.
Am I right?
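If that is indeed the cause, here is a sketch of what I would try (untested; the Marathon URL is a placeholder): set "requirePorts" so Marathon insists on host port 3306 instead of picking one from the agents' default offered range (31000-32000), and make sure the agents actually offer 3306 in their ports resource.
# Agent side: include 3306 in the offered port range when starting the Mesos agent, e.g.
#   --resources='ports:[3306-3306,31000-32000]'
# App side: merge these fields into the app definition and redeploy
# (if your Marathon version does not accept partial updates via PUT, edit the full app JSON instead):
curl -X PUT http://marathon.example.com:8080/v2/apps/storage/mysql \
  -H "Content-Type: application/json" \
  -d '{
        "requirePorts": true,
        "portDefinitions": [
          { "port": 3306, "protocol": "tcp", "labels": {} }
        ]
      }'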

How to convert a docker run command into a JSON file?

I was wondering if anyone knows how to create a JSON file that would be the same as running:
docker run -p 80:80 -p 443:443 starblade/pydio-v4
I'm trying something very ambitious: I want to start my Docker container in a Kubernetes-Mesos cluster, but I can't seem to get the ports correct in the JSON file; alas, I am still very new to this.
Thanks,
TT
Here are my JSON files:
{
"id": "frontend-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 3,
"replicaSelector": {"name": "frontend"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "frontend-controller",
"containers": [{
"name": "pydio-v4",
"image": "starblade/pydio-v4",
"ports": [{"containerPort": 10001, "protocol": "TCP"}]
}]
}
},
"labels": {"name": "frontend"}
}},
"labels": {"name": "frontend"}
}
{
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 80,
"port": 443,
"targetPort": 10001,
"selector": {
"name": "frontend"
},
"publicIPs": [
"${servicehost}"
]
}
Docker container Env info pulled from docker inspect command:
"Env": [
"FRONTEND_SERVICE_HOST=10.10.10.14",
"FRONTEND_SERVICE_PORT=443",
"FRONTEND_PORT=tcp://10.10.10.14:443",
"FRONTEND_PORT_443_TCP=tcp://10.10.10.14:443",
"FRONTEND_PORT_443_TCP_PROTO=tcp",
"FRONTEND_PORT_443_TCP_PORT=443",
"FRONTEND_PORT_443_TCP_ADDR=10.10.10.14",
"KUBERNETES_SERVICE_HOST=10.10.10.2",
"KUBERNETES_SERVICE_PORT=443",
"KUBERNETES_PORT=tcp://10.10.10.2:443",
"KUBERNETES_PORT_443_TCP=tcp://10.10.10.2:443",
"KUBERNETES_PORT_443_TCP_PROTO=tcp",
"KUBERNETES_PORT_443_TCP_PORT=443",
"KUBERNETES_PORT_443_TCP_ADDR=10.10.10.2",
"KUBERNETES_RO_SERVICE_HOST=10.10.10.1",
"KUBERNETES_RO_SERVICE_PORT=80",
"KUBERNETES_RO_PORT=tcp://10.10.10.1:80",
"KUBERNETES_RO_PORT_80_TCP=tcp://10.10.10.1:80",
"KUBERNETES_RO_PORT_80_TCP_PROTO=tcp",
"KUBERNETES_RO_PORT_80_TCP_PORT=80",
"KUBERNETES_RO_PORT_80_TCP_ADDR=10.10.10.1",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PYDIO_VERSION=6.0.5"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
The pod and service both start and run OK.
However, I am unable to access the running Pydio site on any of the master, minion, or frontend IPs.
Note:
I am running a modified version of this Docker container:
https://registry.hub.docker.com/u/kdelfour/pydio-docker/
My container has been tested and it runs as expected.
You should see the login screen once it is running.
Please let me know if I can provide any other information.
Thanks again.
So, I finally got this to work using the following .json files:
frontend-service.json
{
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 443,
"selector": {
"name": "frontend"
},
"publicIPs": [
"${servicehost}"
]
}
frontend-controller.json
{
"id": "frontend-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 1,
"replicaSelector": {"name": "frontend"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "frontend-controller",
"containers": [{
"name": "pydio-v4",
"image": "starblade/pydio-v4",
"ports": [{"containerPort": 443, "hostPort": 31000}]
}]
}
},
"labels": {"name": "frontend"}
}},
"labels": {"name": "frontend"}
}
I now have Pydio with SSL running in a Mesos-Kubernetes environment on GCE.
I'm going to run some tests using more hostPorts to see if I can get more than one replica running on one host. At this point I can resize up to 3.
Hope this helps someone.
Thanks,
TT
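To make the mapping from the original command explicit (this is just how I read it, in the same v1beta1 terms as above):
docker run -p 80:80 -p 443:443 starblade/pydio-v4
# -p 443:443 -> "ports": [{"containerPort": 443, "hostPort": 31000}] in
#               frontend-controller.json, plus "port": 443 on the frontend service
# -p 80:80   -> not carried over; only the SSL port (443) is exposed in this setup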

OrientDB Transformer 'jdbc' not found

I've recently installed OrientDB and am trying to create an import using the ETL module.
I'm running on OS X and installed OrientDB using Homebrew.
I've created the following ETL script:
{
"config": {
"log": "debug"
},
"begin": [
],
"extractor" : {
"row": {}
},
"transformers": [
{ "jdbc": {
"driver": "com.mysql.jdbc.Driver",
"url": "jdbc:mysql://localhost/dev_database",
"userName": "root",
"userPassword": "",
"query": "select * from users limit 20"
}
},
{ "vertex": { "class": "V" } }
],
"loader": {
"orientdb": {
"dbURL": "plocal:../databases/ETLDemo",
"dbUser": "admin",
"dbPassword": "admin",
"dbAutoCreate": true,
"tx": false,
"batchCommit": 1000,
"dbType": "graph"
}
}
}
I followed the instructions here: http://www.orientechnologies.com/docs/2.0/orientdb-etl.wiki/Import-from-DBMS.html
installed the JDBC driver for MySQL from here: http://dev.mysql.com/downloads/connector/j/
and set the classpath as described.
Running the command:
./oetl.sh ../import_mysql.json
Gives the following output:
OrientDB etl v.2.0.2 (build #BUILD#) www.orientechnologies.com
Exception in thread "main" com.orientechnologies.orient.core.exception.OConfigurationException: Error on creating ETL processor
at com.orientechnologies.orient.etl.OETLProcessor.parse(OETLProcessor.java:278)
at com.orientechnologies.orient.etl.OETLProcessor.parse(OETLProcessor.java:188)
at com.orientechnologies.orient.etl.OETLProcessor.main(OETLProcessor.java:163)
Caused by: java.lang.IllegalArgumentException: Transformer 'jdbc' not found
at com.orientechnologies.orient.etl.OETLComponentFactory.getTransformer(OETLComponentFactory.java:141)
at com.orientechnologies.orient.etl.OETLProcessor.parse(OETLProcessor.java:260)
... 2 more
I did manage to create a working import using a CSV file, so I'm pretty sure that the database is set up correctly.
Thoughts?
In the ETL module, jdbc is an extractor, not a transformer, so try moving that block into the "extractor" section:
{
"config": {
"log": "debug"
},
"extractor": {
"jdbc": {
"driver": "com.mysql.jdbc.Driver",
"url": "jdbc:mysql://localhost/dev_database",
"userName": "root",
"userPassword": "",
"query": "select * from users limit 20"
}
},
"transformers" : [
{ "vertex": { "class": "V"} }
],
"loader": {
"orientdb": {
"dbURL": "plocal:../databases/ETLDemo",
"dbUser": "admin",
"dbPassword": "admin",
"dbAutoCreate": true,
"tx": false,
"batchCommit": 1000,
"dbType": "graph"
}
}
}
Can you see if this solves the problem?
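If the extractor then complains that it cannot load the MySQL driver, another thing worth trying (assuming a standard install layout; the exact jar name depends on the Connector/J version you downloaded) is to copy the Connector/J jar into OrientDB's lib directory so oetl.sh picks it up, then re-run the import:
cp mysql-connector-java-5.1.35-bin.jar "$ORIENTDB_HOME/lib/"
./oetl.sh ../import_mysql.json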