I am trying to integrate Deepnote and Notion. I have already connected the database to the integration in Notion and added the environment variables correctly, but when I execute node index.js, as suggested in Notion's documentation for integrations, I get this error:
@notionhq/client warn: request fail {
code: 'object_not_found',
message: 'Could not find database with ID: ********************************. Make sure the relevant pages and databases are shared with your integration.'
}
The database is correctly shared with the Deepnote integration: check this screenshot.
Any clues? Thanks so much <3
I am trying to connect Notion as a database for Deepnote graphics, so that they can then be iframed into Notion.
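For reference, this is roughly what my index.js does (a sketch only; NOTION_API_KEY and NOTION_DATABASE_ID are just the names I happen to use for the environment variables):

// index.js (sketch)
const { Client } = require("@notionhq/client");

// The integration token comes from an environment variable
const notion = new Client({ auth: process.env.NOTION_API_KEY });

async function main() {
  // This call fails with object_not_found if the integration
  // has not been granted access to the database
  const response = await notion.databases.query({
    database_id: process.env.NOTION_DATABASE_ID,
  });
  console.log(`Fetched ${response.results.length} rows`);
}

main().catch(console.error);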
I have an ASP.NET Core application hosted on GCP App Engine. When I try to deploy the application, it fails on the last step:
Updating service [name] (this may take several minutes)... ...failed
ERROR: (gcloud.app.deploy) Error Response: [9] An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>blablabla.wm.1
The exception stack trace shows that a service running in the background couldn't find a MySQL table (a table that obviously exists).
My app.yaml file:
service: XXX
runtime: custom
env: flex

automatic_scaling:
  max_concurrent_requests: 80
  min_num_instances: 1
  max_num_instances: 1

resources:
  cpu: XXX
  memory_gb: XXX

beta_settings:
  cloud_sql_instances: "XXX:XXXX:XXXX=tcp:3306"
It looks like the application is deployed properly despite the error. This is the only error, the background service doesn't throw any exceptions at a later point, and in fact it works properly and can connect to the database.
My guess was that maybe GCP checks health while the application is not yet connected to the database, so I tried adding liveness_check and readiness_check to app.yaml and configured a dedicated /healthcheck endpoint in my application, but it didn't make any difference.
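For reference, this is roughly what I added (a sketch only; the /healthcheck path and the timings are placeholders I chose):

liveness_check:
  path: "/healthcheck"
  check_interval_sec: 30
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2

readiness_check:
  path: "/healthcheck"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300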
Any ideas how to fix it, and what might be the cause?
Deploying the app with a new version fixed the issue.
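If that means forcing a new App Engine version on deploy, a sketch of the command (the version name is arbitrary):

# Deploy the same code under a fresh version name and route traffic to it
gcloud app deploy app.yaml --version v2 --promote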
I am trying to establish an event subscription via ZMQ from my locally running Sawtooth network. As soon as I start my event-subscriber container, I get the error "interrupted system call".
I am following the example from here https://github.com/danintel/sawtooth-cookiejar/tree/master/events/go
I have tried using validatorUrl as tcp://localhost:4004 and tcp://validator-0:4004.
Note: validator-0 is my local container name for the validator.
I have also tried the direct IP of the validator container: tcp://<IP>:4004.
zmqConnection.RecvMsgWithId() is throwing the error.
The error I am getting is exactly at this line https://github.com/danintel/sawtooth-cookiejar/blob/master/events/go/src/events_client.go#L105
Can someone please help with the probable reasons, or with how I can debug this one?
I do not know, but one possible cause is that this example was recently updated to a new version of Go, 1.11 (from 1.9), after your posting:
https://github.com/danintel/sawtooth-cookiejar/pull/9
The update was made because of this error:
Loading input failed: unsupported version of go: exit status 2: flag provided but not defined: -compiled
The issue was related to inter-pod communication. My event subscriber client was in a completely different pod from the pod where the validator container was running. In that case, you need to use the FQDN of that pod. Refer to the link below.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields
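As an illustration, the URL then uses the pod's hostname plus subdomain rather than localhost or the bare container name (all names below are made up; substitute your own):

// events_client.go (sketch): pod "validator-0", headless service/subdomain "sawtooth", namespace "default"
validatorUrl := "tcp://validator-0.sawtooth.default.svc.cluster.local:4004"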
I'm using Symfony 4. What does this error mean?
Warning: SessionHandler::read(): open(/var/lib/php/sessions/sess_634q91mh896b6aa4jpjvlihmar, O_RDWR) failed: Permission denied (13)
When a user logs into a Symfony application, session information is stored on the web server. By default Symfony uses the native PHP session mechanism, storing session info in a file in /var/lib/php/sessions/ on Linux systems. Your error message is output by PHP and means it got a permissions error creating or re-opening a session file.
The error appears only intermittently because PHP removes old session files randomly, roughly once every 100 or 1,000 page loads. (On some Linux variants, old session files are removed by a cron job instead.)
https://symfony.com/doc/current/session.html says:
"some session expiration related options may not work as expected if other applications that write to the same directory have short max lifetime settings."
Try to avoid having multiple processes writing to the same sessions directory. I think I got the error message because both an Apache web server and php bin/console server:start were running at the same time. One process may have removed the other process's session file.
See PHP manual and Symfony manual for how to configure writing to separate directories. For example, I changed {Symfony directory}/config/packages/framework.yaml:
    # Enables session support. Note that the session will ONLY be started if you read or write from it.
    # Remove or comment this section to explicitly disable session support.
    session:
        # handler_id: ~
        cookie_secure: auto
        cookie_samesite: lax
        handler_id: 'session.handler.native_file'
        save_path: '%kernel.project_dir%/var/sessions/%kernel.environment%'
        gc_probability: 100 # Run garbage collection always for
        gc_divisor: 100     # investigating this problem only.
Another possibility is that a problem in your Symfony code is causing the error message. The Symfony documentation says not to call PHP session functions like session_start() directly, since Symfony classes call them. A bug in my code caused an exception, which I speculate caused the error message.
Related stack overflow questions: cleanup-php-session-files and how-does-php-know-when-to-delete-a-session
For those familiar with C code, see the PHP interpreter source code line that prints the error here
Hope this helps!
This is not related to Symfony 4 specifically, but you have to fix the permissions on your /var/lib/php/sessions/ directory.
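For example (a sketch for Debian/Ubuntu; adjust if your PHP runs as a user other than www-data):

# Option 1: restore the distro default (world-writable with sticky bit, owned by root)
sudo chmod 1733 /var/lib/php/sessions

# Option 2: hand the directory to the web server / PHP-FPM user instead
sudo chown -R www-data:www-data /var/lib/php/sessions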
The goal is to be able to send messages using AWS SQS+SNS. This has been a struggle for a few days and I don't know how to make it work.
Symfony 4.2 has a new component, Messenger, that I wanted to use. It is supposed to work with php-enqueue as a third-party transport, and I am using that to connect to AWS SQS+SNS.
I can't find any documentation that puts it all together. I see how php-enqueue connects to AWS, but the docs show the config in the code and not in the config yaml or .env files. That is a problem since I want Messenger/enqueue to handle the behind-the-scenes stuff.
I was able to make Symfony Messenger work without php-enqueue for local synchronous messages. But after that... Clearly I am not doing it right. I was hoping someone might have a boilerplate for this configuration.
Here is where I am at. I am just trying to send a message using SQS. I am getting an error:
Error executing "GetQueueUrl" on "https://sqs.us-west-2.amazonaws.com";
AWS HTTP error: Client error: `POST https://sqs.us-west-2.amazonaws.com`
resulted in a `400 Bad Request`
I tried many permutations of keys in the enqueue.yaml file but did not get it right. I used this for help but could not get it to work. https://enqueue.readthedocs.io/en/stable/bundle/config_reference/
Edit: I found that you can add the topic and queue names to the DSN. I no longer get the error and a topic is created, but the queue is not. Now the message bus is working, but synchronously and locally; no message is sent to AWS.
These are the Composer libs I installed. I am sure that there are too many, but I kept trying to make it work.
"aws/aws-sdk-php": "^3.19",
"enqueue/amqp-lib": "^0.9.8",
"enqueue/enqueue-bundle": "^0.9.8",
"enqueue/messenger-adapter": "^0.2.2",
"enqueue/snsqs": "^0.9.0",
"guzzlehttp/guzzle": "^6.0",
"symfony/amqp-pack": "^1.0",
"symfony/messenger": "4.2.*",
This is my messenger.yaml
framework:
    messenger:
        transports:
            amqp: 'enqueue://default?topic[name]=testQ&queue[name]=testQ'

        routing:
            # Route your messages to the transports
            'App\Message\SmsMessage': amqp
This is enqueue.yaml
enqueue:
    default:
        transport:
            dsn: '%env(resolve:ENQUEUE_DSN)%'
        client: ~
This is the entry in .env
###> enqueue/enqueue-bundle ###
ENQUEUE_DSN=snsqs::?key={key}&secret={secret}&region=us-west-2
###< enqueue/enqueue-bundle ###
This is the code in a controller to send a message:
public function index(MessageBusInterface $messageBus) {
    $message = new SmsMessage('This is so cool');
    $messageBus->dispatch($message);
    // ...
}
I had this same issue, which I managed to fix.
This is my messenger.yaml config that's working with SQS:
transports:
    sqs:
        dsn: enqueue://default?topic[name]=YOURTOPICNAME&queue[name]=YOURQUEUENAME&receiveTimeout=3
Hopefully this is of use to someone
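For completeness, the enqueue side only needs the DSN; this is a sketch of the enqueue.yaml and .env entries I'd expect alongside it (key, secret and region are placeholders, and the snsqs scheme comes from the enqueue/snsqs package):

# enqueue.yaml
enqueue:
    default:
        transport:
            dsn: '%env(resolve:ENQUEUE_DSN)%'
        client: ~

# .env
ENQUEUE_DSN=snsqs:?key=YOURKEY&secret=YOURSECRET&region=us-west-2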
I've been trying to allow a program I am writing to access Google Drive. I have gotten the client secrets information successfully, copied and pasted the example code, and tried using it to authenticate my program and use the Google Drive API.
However, when it gets to the line
Credential credential = new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");
I get this error. This error has been posted about before, and I've tried essentially every solution. I've run both my program and all the java.exe files as administrator, and I still got this error.
The full error is:
Oct 03, 2015 11:48:39 AM com.google.api.client.util.store.FileDataStoreFactory setPermissionsToOwnerOnly
WARNING: unable to change permissions for everybody: D:\directory
Oct 03, 2015 11:48:39 AM com.google.api.client.util.store.FileDataStoreFactory setPermissionsToOwnerOnly
WARNING: unable to change permissions for owner: D:\directory
I've also tried overriding setPermissionsToOwnerOnly when I instantiated the FileDataStoreFactory, but that failed as well.
I have tried the following solutions:
http://stackoverflow.com/questions/30634827/warning-unable-to-change-permissions-for-everybody
http://stackoverflow.com/questions/24382069/error-while-executing-google-prediction-api-command-line-sample
https://groups.google.com/forum/#!topic/google-analytics-data-export-api/-7BH7Z40gkw (where the client secret data was hard-coded into the program; this is bad, I know, but it didn't work anyway)
I don't know what to do at this point. I am running my program off a flash drive, and I tried running it off my computer as well, but it still failed. I am using NetBeans 8.0.2.
The error comes up as a warning, so maybe there is some way to just ignore it and proceed? That could be a solution, but from my research I'm not sure if that's possible. I am running Windows 10, if that matters.
I just ran the Drive REST API Java Quickstart tutorial through Eclipse and it is working fine. It does require a bit of setup time if you have not installed Gradle (the Eclipse Marketplace also has a plugin for Gradle).
To your point, I did get the same warning messages. However, for me they happened while loading the client secrets in the authorize() method.
public static Credential authorize() throws IOException {
    // Load client secrets.
    InputStream in =
            DriveQuickstart.class.getResourceAsStream("/client_secret.json");
    GoogleClientSecrets clientSecrets =
            GoogleClientSecrets.load(JSON_FACTORY, new InputStreamReader(in));
    // ... (rest of authorize() as in the Quickstart)
}
I suspect this is where your issue is happening. Since I am not able to see that part of your code snippet, have a look at where your client_secret.json file is located.
Hope this helps. Good luck!
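If the warning itself bothers you, one thing worth trying (a sketch only, reusing the Quickstart's HTTP_TRANSPORT, JSON_FACTORY, clientSecrets and SCOPES; the .credentials/drive-sample folder name is arbitrary) is pointing FileDataStoreFactory at a writable folder on your local disk instead of the flash drive:

// Sketch, inside authorize() (which already throws IOException):
// store OAuth tokens under the user's home directory instead of D:\directory
java.io.File dataDir = new java.io.File(System.getProperty("user.home"), ".credentials/drive-sample");
FileDataStoreFactory dataStoreFactory = new FileDataStoreFactory(dataDir);

GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
        HTTP_TRANSPORT, JSON_FACTORY, clientSecrets, SCOPES)
    .setDataStoreFactory(dataStoreFactory)
    .setAccessType("offline")
    .build();
Credential credential =
    new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");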