ESPHome not working for MH-Z19 with Lolin D1 mini - home-assistant

I connected the MH-Z19 sensor to the D1 mini and want to flash it via ESPHome.
I followed this guide:
https://esphome.io/components/sensor/mhz19.html
I used the following configuration:
esphome:
  name: co2-sensor

esp8266:
  board: esp01_1m

# Enable logging
#logger:

# Enable Home Assistant API
api:

ota:
  password: "xxxxxx"

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Co2-Sensor Fallback Hotspot"
    password: "xxxxx"

captive_portal:

uart:
  rx_pin: GPIO3
  tx_pin: GPIO1
  baud_rate: 9600

sensor:
  - platform: mhz19
    co2:
      name: "CO2 Value"
    temperature:
      name: "MH-Z19 Temperature"
    update_interval: 60s
    automatic_baseline_calibration: false
But I cannot flash it; I get the following error:
======================== [SUCCESS] Took 305.85 seconds ========================
INFO Successfully compiled program.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received.
For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received.
For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
I can, however, flash it and it comes online if I disconnect the sensor; of course it then publishes no data. So I assume it's something to do with UART. I also tried disabling the logging, which did nothing.

It turned out the default UART pins (GPIO1/GPIO3) are used for debug logging, and they are also the hardware UART that the USB serial adapter uses for flashing, so the sensor holds the serial lines even though I thought I had disabled the logging option. I used pins 4 and 5 instead and it worked.
https://esphome.io/components/uart.html#uart
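For anyone else hitting this, the UART block that worked looks roughly like this (a minimal sketch; I am assuming GPIO4/GPIO5 here, and which pin is RX vs. TX depends on how the sensor is wired):
uart:
  rx_pin: GPIO4  # free GPIOs on the D1 mini, not the hardware UART
  tx_pin: GPIO5
  baud_rate: 9600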
Note that the value I got was 5000 ppm at first, when it was plugged into the Pi on which I'm running HA. When connecting it to another PSU, I got normal-looking values. I assume it simply did not get enough power from the Pi.

Related

Amazon Athena MySQL connector: "Access denied for user '[USERNAME]'@'[IP]' (using password: YES)"

I have a MySQL database instance in AWS RDS. I'd like to access it using AWS Athena. I used the "Amazon Athena Lambda MySQL Connector" to set up the new Data Source:
https://github.com/awslabs/aws-athena-query-federation/tree/master/athena-mysql
I installed this using the Serverless Application Repository. Here's the application:
https://serverlessrepo.aws.amazon.com/applications/us-east-1/292517598671/AthenaMySQLConnector
In the application settings before deploying, I used the same SecurityGroupIDs and SubnetIds as I used in another lambda function that is able to query the same database just fine.
In the environment variables for the Lambda, I have the connection string set under both the keys default and rds_mysql_connection_string (rds_mysql is the name of the Data Source in Athena). The connection string is in the format:
mysql://jdbc:mysql://HOSTNAME.us-east-1.rds.amazonaws.com:3306/DBNAME?user=USERNAME&password=PASSWORD
When I try to switch to the new data source in the Athena query editor, I get this error:
Access denied for user '[USERNAME]'@'[IP]' (using password: YES)
I diffed the role for the Lambda that can connect against the one for the connector and they're pretty much the same. I even tried giving the connector the exact same role for a minute, but it didn't help.
Using the test button on the Lambda function for the connector also throws an error, but this could be a red herring: I used a blank test event, which is not a valid connector request, so the deserializer presumably fails as soon as it hits the empty JSON object. This is what I get in the logs:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Transforming org/apache/logging/log4j/core/lookup/JndiLookup (lambdainternal.CustomerClassLoader@1a6c5a9e)
START RequestId: 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a Version: $LATEST
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a INFO BaseAllocator:58 - Debug mode disabled.
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a INFO DefaultAllocationManagerOption:97 - allocation manager type not specified, using netty as the default type
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a INFO CheckAllocator:73 - Using DefaultAllocationManager at memory/DefaultAllocationManagerFactory.class
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a WARN CompositeHandler:107 - handleRequest: Completed with an exception.
java.lang.IllegalStateException: Expected field name token but got END_OBJECT
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.assertFieldName(BaseDeserializer.java:221) ~[task/:?]
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.getType(BaseDeserializer.java:295) ~[task/:?]
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.doDeserialize(DelegatingDeserializer.java:56) ~[task/:?]
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.deserializeWithType(DelegatingDeserializer.java:49) ~[task/:?]
at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:74) ~[task/:?]
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) ~[task/:?]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674) ~[task/:?]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3666) ~[task/:?]
at com.amazonaws.athena.connector.lambda.handlers.CompositeHandler.handleRequest(CompositeHandler.java:99) [task/:?]
at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:899) [aws-lambda-java-runtime-0.2.0.jar:?]
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:268) [aws-lambda-java-runtime-0.2.0.jar:?]
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:206) [aws-lambda-java-runtime-0.2.0.jar:?]
at lambdainternal.AWSLambda.main(AWSLambda.java:200) [aws-lambda-java-runtime-0.2.0.jar:?]
Expected field name token but got END_OBJECT: java.lang.IllegalStateException
java.lang.IllegalStateException: Expected field name token but got END_OBJECT
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.assertFieldName(BaseDeserializer.java:221)
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.getType(BaseDeserializer.java:295)
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.doDeserialize(DelegatingDeserializer.java:56)
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.deserializeWithType(DelegatingDeserializer.java:49)
at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:74)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3666)
at com.amazonaws.athena.connector.lambda.handlers.CompositeHandler.handleRequest(CompositeHandler.java:99)
END RequestId: 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a
REPORT RequestId: 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a Duration: 343.41 ms Billed Duration: 344 ms Memory Size: 3008 MB Max Memory Used: 170 MB Init Duration: 2665.96 ms

Error with Prysm beacon-chain with testnet Pyrmont

I am trying to run a beacon chain for Ethereum 2.0 on the Pyrmont testnet with Prysm and Besu.
I run the ETH1 node with the command:
besu --network=goerli --data-path=/root/goerliData --rpc-http-enabled
This command works, downloads the entire blockchain, and then runs properly.
But when I launch:
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont
I get:
Verified /root/prysm/dist/beacon-chain-v1.0.0-beta.3-linux-amd64 has been signed by Prysmatic Labs.
Starting Prysm beacon-chain --http-web3provider=localhost:8545 --pyrmont
[2020-11-18 14:03:06] WARN flags: Running on Pyrmont Testnet
[2020-11-18 14:03:06] INFO flags: Using "max_cover" strategy on attestation aggregation
[2020-11-18 14:03:06] INFO node: Checking DB database-path=/root/.eth2/beaconchaindata
[2020-11-18 14:03:08] ERROR main: database contract is xxxxxxxxxxxx3fdc but tried to run with xxxxxxxxxxxx6a8c
I tried deleting the previous data folder /root/goerliData and re-downloading the blockchain, but nothing changed...
Why didn't the database contract change, and what should I do?
Thanks :)
The error means that you have an existing database for another network, probably Medalla.
Try starting your beacon node with the flag --clear-db next time, and you'll see the error disappear and the node start syncing Pyrmont.
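For example, combining it with the flags from the question (a sketch; adjust paths and flags to your setup):
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont --clear-db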

Connecting to CloudSQL Mysql over ssl from external application

I am trying to get a sample Java application to connect to a MySQL gen2 instance I have in GCP. I use SSL, and the IP address is whitelisted. I have confirmed connectivity to the instance using the mysql command line and passing in the client-cert.pem, client-key.pem and server-ca.pem. Now, in order to connect to it from the Spring Boot Java application, I did the following:
- created a p12 file from the client cert and key and added it to keystore.jks
- created a truststore with the server-ca.pem file
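For reference, this is roughly how the keystore and truststore can be built with openssl and the JDK keytool (a sketch; the file names, alias, and password here are placeholders, not the ones I actually used):
# bundle the client cert and key into a PKCS#12 file
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
  -name mysql-client -out client.p12 -password pass:fake_password
# import the PKCS#12 bundle into a JKS keystore
keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 \
  -srcstorepass fake_password -destkeystore keystore.jks -deststorepass fake_password
# put the server CA into a separate truststore
keytool -importcert -alias cloudsql-server-ca -file server-ca.pem \
  -keystore truststore.jks -storepass fake_password -noprompt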
Then I added this code in main before the connection is created:
System.setProperty("javax.net.debug", "all");
System.setProperty("javax.net.ssl.trustStore", TRUST_STORE_PATH);
System.setProperty("javax.net.ssl.trustStorePassword", "fake_password");
System.setProperty("javax.net.ssl.keyStore", KEY_STORE_PATH);
System.setProperty("javax.net.ssl.keyStorePassword", "fake_password");
For the JDBC URL, I used: jdbc:mysql://1.1.1.1:3306/sampledb?useSSL=true&requireSSL=true
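For completeness, the connection itself is opened in the usual JDBC way; a minimal sketch (the paths, credentials, and class name are placeholders, and it assumes MySQL Connector/J on the classpath):
import java.sql.Connection;
import java.sql.DriverManager;

public class CloudSqlSslTest {
    public static void main(String[] args) throws Exception {
        // trust store / key store setup as shown above (placeholder paths)
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "fake_password");
        System.setProperty("javax.net.ssl.keyStore", "/path/to/keystore.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "fake_password");

        String url = "jdbc:mysql://1.1.1.1:3306/sampledb?useSSL=true&requireSSL=true";
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword")) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}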
However, I am unable to connect to the instance, and I see this error in the Java SSL debug output:
restartedMain, RECV TLSv1.1 ALERT: fatal, unknown_ca
%% Invalidated: [Session-2, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
restartedMain, called closeSocket()
restartedMain, handling exception: javax.net.ssl.SSLHandshakeException: Received fatal alert: unknown_ca
restartedMain, called close()
restartedMain, called closeInternal(true)
I also tried to run
openssl verify -CAfile server-ca.pem client-cert.pem
and got this output:
error 20 at 0 depth lookup:unable to get local issuer certificate
Any ideas on what I might be doing wrong?

Why does my OpenShift app time out when I try to access the URL?

I am trying to set up a BrowserQuest server that runs on OpenShift.
I've been following this readme. Everything seems to go fine; I get to the end, run rhc app show bq, and get the following output:
bq # http://bq-plantagenet.rhcloud.com/ (uuid: 55e4311189f5cf028d0000fc)
------------------------------------------------------------------------
Domain: plantagenet
Created: 8:18 AM
Gears: 1 (defaults to small)
Git URL: ssh://55e4311189f5cf028d0000fc@bq-plantagenet.rhcloud.com/~/git/bq.git/
SSH: 55e4311189f5cf028d0000fc@bq-plantagenet.rhcloud.com
Deployment: auto (on git push)
nodejs-0.10 (Node.js 0.10)
--------------------------
Gears: Located with smarterclayton-redis-2.6
smarterclayton-redis-2.6 (Redis)
--------------------------------
From: http://cartreflect-claytondev.rhcloud.com/reflect?github=smarterclayton/openshift-redis-cart
Website: https://github.com/smarterclayton/openshift-redis-cart
Gears: Located with nodejs-0.10
But when I try to access http://bq-plantagenet.rhcloud.com:8080/ in a browser, I get:
The connection has timed out
The server at bq-plantagenet.rhcloud.com is taking too long to respond
My questions are: what is going wrong, and how can I fix it? Many thanks for your consideration in reading through this and for any suggestions you might have for resolving it.
You need to access http://bq-plantagenet.rhcloud.com; leave off the port 8080, which is the port you listen on internally. You should also try checking your log files (https://developers.openshift.com/en/managing-log-files.html) to see what errors your application is producing.
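For example, with the rhc client you can stream the logs directly (a sketch, assuming the app name bq from the question):
rhc tail bq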

Unable to access Google Compute Engine instance using external IP address

I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now, suddenly, the instance cannot be accessed using its external IP address.
I logged in to the developer console and tried rebooting the instance but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the serial port output, it appears the init process is not starting properly.
I am pasting below the last few lines from the serial port output of the virtual machine.
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with a higher capacity using your PD; however, if it's a script causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project=<project-id> getserialportoutput <instance-name>
If the issue still continues, you can either:
- Make a snapshot of your PD and create a copy of it, or
- Delete the instance without deleting the PD.
Then attach and mount the PD to another VM as a second disk, so you can access it and find what is causing this issue. Visit this link https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit this page http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inodes troubleshooting.
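Once the PD is attached to the rescue VM, checking inode usage is a one-liner (a sketch; the device name and mount point below are assumptions, check dmesg for the actual device):
sudo mkdir -p /mnt/pd
sudo mount /dev/sdb1 /mnt/pd
df -i /mnt/pd    # an IUse% column at 100% confirms you are out of inodes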
Make sure the "Allow HTTP traffic" setting on the VM is still enabled.
Then see which network firewall you are using and check its rules.
If your network is set up to use an ephemeral IP, it will be periodically released back. This will cause your IP to change over time. Set it to static/reserved instead (on the Networks page).
https://developers.google.com/compute/docs/instances-and-network#externaladdresses
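As a sketch with the newer gcloud CLI (the address name and region below are placeholders), you can review your firewall rules and reserve a static address like this:
gcloud compute firewall-rules list
gcloud compute addresses create my-static-ip --region us-central1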