We have several shared schedules on an SSRS 2019 server that keep failing. I have tried removing the end date and changing it to various dates in the future.
We have found scripts that show us the scheduled job ID so we can look at the job on the server.
What else should we be doing to help find the cause of this?
I'm not sure whether this error is related. The job ran this morning at 6 AM and expired some time after (though I don't know how to tell exactly when). As for the error referencing the user account: that user is active and has access to the report.
library!ReportServer_0-1!11f0!06/27/2022-07:30:33:: i INFO: Entering StreamRequestHandler.ExecuteCommand - Command = StyleSheet
library!ReportServer_0-1!11f0!06/27/2022-07:30:33:: i INFO: Exiting StreamRequestHandler.ExecuteCommand - Command = StyleSheet (success)
library!ReportServer_0-1!4014!06/27/2022-07:30:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database.;
session!ReportServer_0-1!4014!06/27/2022-07:30:33:: e ERROR: Error in getting session data: Invalid or Expired Session: 3js___Redacted____
session!ReportServer_0-1!4014!06/27/2022-07:30:33:: i INFO: LoadSnapshot: Item with session: 3js___Redacted____, reportPath: , userName: GHS2000\username_redacted_for_post not found in the database
library!ReportServer_0-1!4014!06/27/2022-07:30:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ExecutionNotFoundException: , Microsoft.ReportingServices.Diagnostics.Utilities.ExecutionNotFoundException: The report execution 3js___Redacted____ has expired or cannot be found.;
library!ReportServer_0-1!4014!06/27/2022-07:30:33:: i INFO: Call to GetItemTypeAction(/Scheduled/Hospital Managers/Medication Barcoding Compliance/Medication Barcode Compliance). User: GHS2000\username_redacted_for_post.
library!ReportServer_0-1!4014!06/27/2022-07:30:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database.;
session!ReportServer_0-1!4014!06/27/2022-07:30:33:: e ERROR: Error in getting session data: Invalid or Expired Session: 3js___Redacted____
session!ReportServer_0-1!4014!06/27/2022-07:30:33:: i INFO: LoadSnapshot: Item with session: 3js___Redacted____, reportPath: , userName: GHS2000\username_redacted_for_post not found in the database
library!ReportServer_0-1!4014!06/27/2022-07:30:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ExecutionNotFoundException: , Microsoft.ReportingServices.Diagnostics.Utilities.ExecutionNotFoundException: The report execution 3js___Redacted____ has expired or cannot be found.;
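To pin down exactly when a session or execution expired, it can help to pull timestamps and severities out of the trace log programmatically rather than eyeballing it. Below is a small sketch (Python; `parse_line` is a hypothetical helper, not part of SSRS) that parses lines in the format shown above:

```python
import re
from datetime import datetime

# SSRS trace-log lines have the shape:
#   component!instance!thread!MM/DD/YYYY-HH:MM:SS:: <sev> <LEVEL>: <message>
LOG_RE = re.compile(
    r"^(?P<component>\w+)!(?P<instance>[\w-]+)!(?P<thread>[0-9a-f]+)!"
    r"(?P<ts>\d{2}/\d{2}/\d{4}-\d{2}:\d{2}:\d{2}):: "
    r"(?P<sev>\w) (?P<level>\w+): (?P<message>.*)$"
)

def parse_line(line):
    """Return a dict of fields for one trace-log line, or None if it doesn't match."""
    m = LOG_RE.match(line)
    if not m:
        return None
    d = m.groupdict()
    d["ts"] = datetime.strptime(d["ts"], "%m/%d/%Y-%H:%M:%S")
    return d

sample = ("session!ReportServer_0-1!4014!06/27/2022-07:30:33:: "
          "e ERROR: Error in getting session data: Invalid or Expired Session: 3js")
rec = parse_line(sample)
print(rec["level"], rec["ts"])  # ERROR 2022-06-27 07:30:33
```

Filtering the parsed records to `level == "ERROR"` and sorting by `ts` gives you the exact moment the first expiration error appeared, which you can then line up against the job's run time.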
I have a MySQL database instance in AWS RDS. I'd like to access it using AWS Athena. I used the "Amazon Athena Lambda MySQL Connector" to set up the new Data Source:
https://github.com/awslabs/aws-athena-query-federation/tree/master/athena-mysql
I installed this using the Serverless Application Repository. Here's the application:
https://serverlessrepo.aws.amazon.com/applications/us-east-1/292517598671/AthenaMySQLConnector
In the application settings before deploying, I used the same SecurityGroupIDs and SubnetIds as I used in another lambda function that is able to query the same database just fine.
In the environment variables for the lambda, I have the connection string set under both the key default and rds_mysql_connection_string (rds_mysql is the name of the Data Source in Athena). The connection string is in the format:
mysql://jdbc:mysql://HOSTNAME.us-east-1.rds.amazonaws.com:3306/DBNAME?user=USERNAME&password=PASSWORD
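Since "Access denied" comes back from MySQL itself, the Lambda is evidently reaching the database, so it's worth double-checking that the credentials actually parse out of that string the way you expect. Here's a quick sketch (Python; `split_connection_string` and the example host/user values are hypothetical) for splitting a string in the format above:

```python
from urllib.parse import parse_qs

# The connector's connection string embeds a JDBC URL behind a "mysql://" prefix:
#   mysql://jdbc:mysql://HOST:PORT/DB?user=USERNAME&password=PASSWORD
def split_connection_string(conn):
    prefix = "mysql://jdbc:mysql://"
    if not conn.startswith(prefix):
        raise ValueError("expected a mysql://jdbc:mysql:// prefix")
    rest = conn[len(prefix):]
    hostport, _, db_and_query = rest.partition("/")
    host, _, port = hostport.partition(":")
    db, _, query = db_and_query.partition("?")
    params = {k: v[0] for k, v in parse_qs(query).items()}
    return {"host": host, "port": int(port or 3306), "db": db, "params": params}

info = split_connection_string(
    "mysql://jdbc:mysql://example.us-east-1.rds.amazonaws.com:3306/mydb"
    "?user=admin&password=secret"
)
```

One thing this surfaces quickly: if the real password contains characters like `&` or `#`, it will be truncated at parse time and MySQL will reject it, even though the same credentials work elsewhere.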
When I try to switch to the new data source in the Athena query editor, I get this error:
Access denied for user '[USERNAME]'@'[IP]' (using password: YES)
I diff'ed the role for the lambda that can connect against the one for the connector and they're pretty much the same. I even tried giving the connector the exact same role for a minute but it didn't help.
Using the test button on the lambda function for the connector also throws an error, but this could be a red herring. I use a blank test event and I get this in the logs:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Transforming org/apache/logging/log4j/core/lookup/JndiLookup (lambdainternal.CustomerClassLoader@1a6c5a9e)
START RequestId: 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a Version: $LATEST
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a INFO BaseAllocator:58 - Debug mode disabled.
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a INFO DefaultAllocationManagerOption:97 - allocation manager type not specified, using netty as the default type
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a INFO CheckAllocator:73 - Using DefaultAllocationManager at memory/DefaultAllocationManagerFactory.class
2022-10-20 03:38:51 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a WARN CompositeHandler:107 - handleRequest: Completed with an exception.
java.lang.IllegalStateException: Expected field name token but got END_OBJECT
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.assertFieldName(BaseDeserializer.java:221) ~[task/:?]
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.getType(BaseDeserializer.java:295) ~[task/:?]
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.doDeserialize(DelegatingDeserializer.java:56) ~[task/:?]
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.deserializeWithType(DelegatingDeserializer.java:49) ~[task/:?]
at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:74) ~[task/:?]
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) ~[task/:?]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674) ~[task/:?]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3666) ~[task/:?]
at com.amazonaws.athena.connector.lambda.handlers.CompositeHandler.handleRequest(CompositeHandler.java:99) [task/:?]
at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:899) [aws-lambda-java-runtime-0.2.0.jar:?]
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:268) [aws-lambda-java-runtime-0.2.0.jar:?]
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:206) [aws-lambda-java-runtime-0.2.0.jar:?]
at lambdainternal.AWSLambda.main(AWSLambda.java:200) [aws-lambda-java-runtime-0.2.0.jar:?]
Expected field name token but got END_OBJECT: java.lang.IllegalStateException
java.lang.IllegalStateException: Expected field name token but got END_OBJECT
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.assertFieldName(BaseDeserializer.java:221)
at com.amazonaws.athena.connector.lambda.serde.BaseDeserializer.getType(BaseDeserializer.java:295)
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.doDeserialize(DelegatingDeserializer.java:56)
at com.amazonaws.athena.connector.lambda.serde.DelegatingDeserializer.deserializeWithType(DelegatingDeserializer.java:49)
at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:74)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3666)
at com.amazonaws.athena.connector.lambda.handlers.CompositeHandler.handleRequest(CompositeHandler.java:99)
END RequestId: 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a
REPORT RequestId: 98cdd329-b0e5-49c4-8c9e-ae7b6f51dc2a Duration: 343.41 ms Billed Duration: 344 ms Memory Size: 3008 MB Max Memory Used: 170 MB Init Duration: 2665.96 ms
I have followed the tutorials and successfully installed the monitoring and logging agents on my Debian 9 machine. All statuses OK.
In Metrics Explorer, the gce_instance "Disk usage in bytes" metric works for a few minutes, then breaks. I get the following error on my machine:
Aug 04 15:43:23 master collectd[13129]: write_gcm: Unsuccessful HTTP request 400: {
  "error": {
    "code": 400,
    "message": "Field timeSeries[2].points[0].interval.start_time had an invalid value of \"2020-08-04T07:43:22.681979-07:00\": The start time must be before the end time (2020-08-04T07:43:22.681979-07:00) for the non-gauge metric 'agent.googleapis.com/agent/api_request_count'.",
    "status": "INVALID_ARGUMENT"
  }
}
Aug 04 15:43:23 master collectd[13129]: write_gcm: Error talking to the endpoint.
Aug 04 15:43:23 master collectd[13129]: write_gcm: wg_transmit_unique_segment failed.
Aug 04 15:43:23 master collectd[13129]: write_gcm: wg_transmit_unique_segments failed. Flushing.
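The 400 in that response is the Monitoring API enforcing a rule: for any non-gauge (cumulative or delta) metric, a point's interval start_time must be strictly before its end_time, and the point collectd sent has them equal. A tiny illustration of the rule (Python; `interval_is_valid` is a hypothetical helper, not an agent API):

```python
from datetime import datetime, timedelta

def interval_is_valid(start, end, gauge=False):
    """Cloud Monitoring accepts start == end only for gauge metrics;
    cumulative/delta metrics require start strictly before end."""
    if gauge:
        return start <= end
    return start < end

# The timestamp from the rejected point in the log above:
t = datetime(2020, 8, 4, 7, 43, 22, 681979)

print(interval_is_valid(t, t))                           # False -> 400 INVALID_ARGUMENT
print(interval_is_valid(t, t + timedelta(seconds=60)))   # True
```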
EDITED
For anyone experiencing these issues: it's now a confirmed bug.
I filed a support ticket in the Google Issue Tracker.
These error messages are harmless; you are not losing metrics, so you can ignore them without any problem.
The root cause is a server-side config change that affects all agents. That change only affected the verbosity of the responses, not the processing of the requests: some of the incoming metrics were silently dropped before the change, and are now dropped noisily.
There is an issue tracker entry where you can see more details about the issue affecting you.
So I've installed SendGrid on Google Compute Engine with a CentOS base, following the documented instructions from Google:
https://cloud.google.com/compute/docs/tutorials/sending-mail/using-sendgrid#before-you-begin
Using the test from the command line (various accounts):
echo 'MESSAGE' | mail -s 'SUBJECT' GJ******@gmail.com
the /var/log/maillog shows several lines like this, with 50 or so attempts in one second:
postfix/error[32324]: A293210062D7: to=<GJ********@gmail.com>, relay=none, delay=145998, delays=145997/1.2/0/0, dsn=4.0.0, status=deferred (delivery temporarily suspended: SASL authentication failed; server smtp.sendgrid.net[167.89.115.53] said: 535 Authentication failed: The provided authorization grant is invalid, expired, or revoked)
And the message is queued up and retried every few hours. While experimenting, I changed the port setting from 2525 to one of the regular ports that Google doesn't block, and the test email bounced right away back to the sending user account.
I made sure to use the generated API key; the SendGrid dashboard says no attempts have been made, nothing bounced, etc.
There were other errors in the maillog as well, pages of them, since it retries every second. I changed the permissions on that directory so they no longer appear, but maybe they give a clue to how it's misconfigured?
Oct 31 19:04:14 beadc postfix/pickup[15119]: fatal: chdir("/var/spool/postfix"): Permission denied
Oct 31 19:04:15 beadc postfix/master[1264]: warning: process /usr/libexec/postfix/qmgr pid 15118 exit status 1
Oct 31 19:04:15 beadc postfix/master[1264]: warning: /usr/libexec/postfix/qmgr: bad command startup -- throttling
Oct 31 19:04:15 beadc postfix/master[1264]: warning: process /usr/libexec/postfix/pickup pid 15119 exit status 1
Oct 31 19:04:15 beadc postfix/master[1264]: warning: /usr/libexec/postfix/pickup: bad command startup -- throttling
The only info I can find searching about the error is that it means a SendGrid misconfiguration.
Any ideas as to what the misconfiguration might be?
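For comparison, the relay configuration the Google/SendGrid tutorial ends up with looks roughly like the sketch below (reconstructed from memory, not your actual files). One easy thing to check: the SASL username must be the literal string apikey, with the generated API key as the password.

```
# /etc/postfix/main.cf (relevant lines)
relayhost = [smtp.sendgrid.net]:2525
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
# [smtp.sendgrid.net]:2525 apikey:YOUR_SENDGRID_API_KEY
```

A 535 with "authorization grant is invalid, expired, or revoked" is what SendGrid returns when the username is not apikey or the key itself has been revoked, so that file is the first place to look.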
I've since determined the 535 error was a port/firewall issue, which means the 550 error I had on the other port still exists.
Check your firewall settings with respect to the 535 error.
https://cloud.google.com/compute/docs/tutorials/sending-mail/
I am using Laravel 5.2 and have created a scheduled task at 1 AM that does the following:
Get all users (currently around 250 users)
For each user, dispatch a job (executed by the queue) that adds the user's tasks, normally 10 tasks per user. Below is the handle() method of my command class.
public function handle()
{
    // get all users
    $users = User::all();
    $this->info(count($users) . ' total users');

    // schedule user tasks in queue
    foreach ($users as $user) {
        $job = new ScheduleUserTask($user);
        $this->bus->dispatch($job);
    }
}
Each job then checks the user's tasks and inserts rows into the tasks table.
I am using the database queue driver, with workers managed by supervisord.
My supervisord worker configuration:
[program:mytask-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/myproject/artisan queue:work database --sleep=1 --tries=3 --daemon
autostart=true
autorestart=true
user=user2
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/html/myproject/worker.log
With a single worker process (numprocs=1) it worked nicely. However, when I increased the count to 4, I started getting the error below:
[Illuminate\Database\QueryException]
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction
(SQL: insert into jobs (queue, attempts, reserved, reserved_at, available_at, created_at, payload) values ..
From my understanding, this is caused by multiple worker processes inserting into the jobs table at the same time and contending for its locks.
My question is: what is the maximum number of processes I can run under supervisord in my case?
Is it a good idea to increase innodb_lock_wait_timeout?
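If you do experiment with innodb_lock_wait_timeout (the MySQL server default is 50 seconds), it is a [mysqld] setting. A sketch, with the file path depending on your distribution:

```
# /etc/mysql/my.cnf — raise the InnoDB lock wait timeout (server default: 50)
[mysqld]
innodb_lock_wait_timeout = 120
```

Note this only gives contending inserts longer to wait; it doesn't reduce the contention itself.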
Thanks
I'm using Worklight QA and I got an error trying to send mail using SendGrid.
The error occurs on the "Send activation link" action for the user.
This is part of the error in celeryd.log:
HTTPError: HTTP Error 429: UNKNOWN STATUS CODE
[2014-09-29 13:29:55,549: WARNING/Worker-3] Unable to reach Sentry log server: HTTP Error 429: UNKNOWN STATUS CODE (url: https://app.getsentry.com/api/13389/store/, body: Creation of this event was denied due to rate limiting.)
[2014-09-29 13:29:55,555: ERROR/MainProcess] Failed to submit message: u'error: [Errno 111] Connection refused'
[2014-09-29 13:29:55,556: WARNING/Worker-3] Failed to submit message: u'error: [Errno 111] Connection refused'
[2014-09-29 13:29:55,558: ERROR/MainProcess] Task notifications.email.ActivationEmail[88c97bed-812a-427f-98a1-9bc77ff38876] raised exception: error(111, 'Connection refused')
I've configured local_settings.py with the SendGrid information, the SendGrid account is provisioned and ready to send mails.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_PORT = 587
EMAIL_HOST_USER = '******'
EMAIL_HOST_PASSWORD = '******'
EMAIL_USE_TLS = False
I've also tried disabling iptables on the server, thinking it might be a local firewall issue, but I got the same error.
I don't know if this rate limiting error from Sentry has something to do with it.
This could be some kind of SMTP integration issue on your end; I'm not sure Sentry has anything to do with it.
I suggest changing EMAIL_USE_TLS to True and seeing if that works. It is possible that SendGrid is enforcing TLS.
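Concretely, that suggestion amounts to the settings fragment below (credentials are placeholders). Port 587 is the STARTTLS submission port, so EMAIL_USE_TLS = True matches the port already configured in the question:

```python
# settings.py / local_settings.py — SendGrid over SMTP.
# Port 587 expects STARTTLS, so EMAIL_USE_TLS should be True.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'your_sendgrid_user'      # placeholder
EMAIL_HOST_PASSWORD = 'your_password'       # placeholder
EMAIL_USE_TLS = True
```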