Couchbase Sync Gateway cannot create indexes

I receive an error when I try to start my sync gateway:
[INF] Successfully opened bucket sync
[INF] Set query timeouts for bucket sync to cluster:1m15s, bucket:1m15s
[INF] Initializing indexes with numReplicas: 1...
[INF] Verifying index availability for bucket sync...
[INF] Indexes ready for bucket sync.
[INF] delta_sync enabled=false with rev_max_age_seconds=86400 for database fwws-cluster-default
[INF] Created background task: "CleanAgedItems" with interval 10m0s
[INF] Created background task: "InsertPendingEntries" with interval 2.5s
[INF] Created background task: "CleanSkippedSequenceQueue" with interval 30m0s
[ERR] cbgt index creation failed: manager_api: could not create index, indexDefs.ImplVersion: "NS41LjA=" > mgr.version: 5.5.0 -- base.(*CbgtContext).StartManager() at dcp_sharded.go:298
[ERR] Error opening database my_database: manager_api: could not create index, indexDefs.ImplVersion: "NS41LjA=" > mgr.version: 5.5.0 -- rest.RunServer() at config.go:1028
Does anyone know what can trigger an error like this?

I've managed to fix this by deleting a few documents:
_sync:cfgindexDefs
_sync:cfgnodeDefs-known
_sync:cfgnodeDefs-wanted
_sync:cfgplanPIndexes
_sync:cfgversion
After starting Sync Gateway again, all of the docs were recreated and the error was gone. I am not sure whether this is recommended (there is little information about it on the internet), but it solved the issue.
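Incidentally, "NS41LjA=" is the Base64 encoding of the string "5.5.0", which suggests the stored cbgt index definitions recorded the version in a different encoding than the running manager expected. For anyone who prefers to script the same cleanup, here is a minimal hedged N1QL sketch, assuming the bucket is named sync as in the logs above and a Query node is available (the documents can just as well be deleted through the web console):
-- delete the stale cbgt config documents by key:
DELETE FROM `sync`
USE KEYS [
  "_sync:cfgindexDefs",
  "_sync:cfgnodeDefs-known",
  "_sync:cfgnodeDefs-wanted",
  "_sync:cfgplanPIndexes",
  "_sync:cfgversion"
];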

Related

Replica stopped replicating over 1525 error

Our AWS MySQL RDS replica database stopped replicating due to this error. What is the best way to approach it? If it keeps happening, I doubt we can keep a replica. We also can't just allow any date format to come in. Should we fix the problem or ignore the error?
The data type of last_used is datetime(6).
2022-09-15T17:11:10.044407Z 15395 [ERROR] [MY-010584] [Repl] Slave SQL for channel '': Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin-changelog.041268, end_log_pos 17028740; Error 'Incorrect DATETIME value: '2022-09-15 13:11:10.-99999'' on query. Default database: 'company_name'. Query: 'UPDATE numbers SET current_url = www',last_used = '2022-09-15 13:11:10.000001' WHERE tracking IN (8886424548) AND profile = 111111 AND (last_used < '2022-09-15 13:11:10.-99999' OR last_used IS NULL)', Error_code: MY-001525
We have now tried using 101 ms or 99 ms and it returns the correct value. So we have changed it to use 101 ms. It would be nice to know how we can keep this from happening in all cases. Thanks!
The AWS team came back with an answer on this error:
As per the error, you are using the wrong DATETIME format '2022-09-15 13:11:10.-99999' for the UPDATE query on the table 'company'. The DATETIME format you are using has a negative fractional-seconds value. To fix this error, kindly change that value to a positive one; fractional seconds can never be negative.
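A minimal hedged sketch of that fix, using the table and column names from the log and an illustrative 100 ms offset: computing the cutoff with MySQL's own interval arithmetic, instead of formatting the timestamp string in application code, guarantees the fractional part can never go negative.
-- subtracting via INTERVAL keeps the DATETIME(6) value well-formed:
SELECT TIMESTAMP('2022-09-15 13:11:10.000001') - INTERVAL 100000 MICROSECOND;
-- -> 2022-09-15 13:11:09.900001

-- applied to the failing UPDATE (names taken from the log; offset illustrative):
UPDATE numbers
SET last_used = '2022-09-15 13:11:10.000001'
WHERE tracking IN (8886424548)
  AND profile = 111111
  AND (last_used < NOW(6) - INTERVAL 100000 MICROSECOND
       OR last_used IS NULL);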

Drill "VALIDATION ERROR: A table or view with given name already exists in schema" for empty directory

After upgrading Drill on our cluster to drill-1.12.0-mapr and testing our daily ETL scripts (which all use Drill to convert Parquet files to TSV), a validation error ("table or view with given name already exists") is consistently thrown when running a CREATE TABLE statement against certain empty directories in a writable workspace.
[Error Id: 6ea46737-8b6a-4887-a671-4bddbea02476 on mapr002.ucera.local:31010]
at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
:
:
:
Caused by: org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: A table or view with given name [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] already exists in schema [dfs.etl_internal]
After some brief debugging, I see that the FS directory in question under the specified dfs.etl_internal workspace (i.e. /internal_etl/project/version-2/stages/storage/ACCOUNT/tsv) is in fact empty, yet these errors are still thrown.
Looking for the error ID in the drillbit.log file on the node identified in the error message above, we see:
2018-12-04 10:13:25,285 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id 23f92019-db56-862f-e7b9-cd51b3e174ae: create table dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv` as
select <a bunch of fields>
from dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/parquet`
2018-12-04 10:13:25,406 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,408 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,893 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,894 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,898 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,898 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,905 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.e.p.s.h.CreateTableHandler - User Error Occurred: A table or view with given name [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] already exists in schema [dfs.etl_internal]
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: A table or view with given name [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] already exists in schema [dfs.etl_internal]
[Error Id: 45177abc-7e9f-4678-959f-f9e0e38bc564 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586) ~[drill-common-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.handlers.CreateTableHandler.checkTableCreationPossibility(CreateTableHandler.java:326) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.handlers.CreateTableHandler.getPlan(CreateTableHandler.java:90) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:131) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:79) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
2018-12-04 10:13:25,924 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.apache.drill.exec.work.WorkManager - Waiting for 0 queries to complete before shutting down
2018-12-04 10:13:25,924 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.apache.drill.exec.work.WorkManager - Waiting for 0 running fragments to complete before shutting down
This error occurs even when running DROP TABLE [IF EXISTS] <workspace>.<table path name> before the CREATE TABLE statement. Furthermore, the configuration for the dfs workspace itself does not appear to have changed from before the upgrade to drill-1.12; see below:
:
:
"workspaces": {
"root": {
"location": "/",
"writable": false,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
},
"tmp": {
"location": "/tmp",
"writable": true,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
},
"etl_internal": {
"location": "/etl/internal",
"writable": true,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
}
},
:
:
Note that the full process in question is intended to mv the directory contents away every day and CREATE TABLE with the new data from the current day (in case that makes a difference), and this process had been working fine when we were using drill-1.11.
More debugging information:
Simply deleting the .../tsv endpoint folder and relying on Drill to create the directory during the CREATE TABLE statement does not work; it throws the unsurprising error:
Error: VALIDATION ERROR: Table [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] not found
[Error Id: 02e7c088-9162-4731-9fa8-85dfd39e1dec on mapr001.ucera.local:31010] (state=,code=0)
I.e., Drill does not appear to be automatically creating the table.
Undoing these changes and rerunning to reproduce the original error, we can examine the location via the sqlline interpreter. Doing so, we see:
0: jdbc:drill:zk=mapr001:5181,mapr002:5181,ma> describe dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv`;
+--------------+------------+--------------+
| COLUMN_NAME | DATA_TYPE | IS_NULLABLE |
+--------------+------------+--------------+
+--------------+------------+--------------+
No rows selected (1.791 seconds)
So Drill sees something there, but only when I create the directory myself, which is a catch-22 given that the original error complains that something is already there.
If anyone with more experience using Drill knows what could be happening here, any opinions or advice would be appreciated.
It looks like something went wrong in the process of upgrading the Drill version on your MapR cluster. Please see this doc for more info: http://doc.mapr.com/display/MapR/Upgrading+to+the+Latest+Version+of+Drill
or the latest docs in case you are using the latest MapR Core version:
https://mapr.com/docs/home/UpgradeGuide/PreupgradeStepsDrill.html?hl=drill%2Cupgrade
https://mapr.com/docs/home/UpgradeGuide/PostUpgradeStepsDrill.html?hl=drill%2Cupgrade
DROP TABLE for Drill schemaless tables works fine. See more info about Drill schemaless tables (empty directories):
https://drill.apache.org/docs/data-sources-and-file-formats-introduction/#schemaless-tables
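To make that concrete, a minimal hedged sketch against the path from the question; per the linked docs an empty directory is treated as a schemaless table, so the DROP should succeed even when the directory holds no files:
-- drop the empty-directory "table" before re-creating it:
DROP TABLE IF EXISTS dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv`;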
TL;DR: restarted the drillbits on the nodes and everything appears to be working now.
What was done to get Drill to run the CTAS statement without error:
1. Restarted the Drill services from the MapR MCS. This was done purely on a hunch, due to the hanging-drill-1.11-processes issue encountered earlier: after upgrading from drill-1.11 to drill-1.12, we had to go to each node manually, run jps to see that the 1.11 drillbit was still running, kill -9 the pid of the 1.11 drillbit, and restart the drillbits to get 1.12 working. Not sure how much this helped, but documenting it, as it was the only change made in the process of debugging that was not undone before the changes that ultimately appear to have resolved the error.
2. Changed the drill-using scripts to delete the target folder of the CTAS statement (hadoop fs -rm -r /hdfs/path/to/folder) after running some necessary processes on it, then letting the CTAS statement re-create the folder itself (even though, as mentioned in the original post, I tried this earlier and received "Table not found" errors in a weird catch-22 situation; hence my thinking that restarting the Drill services may have contributed). A sketch of the resulting flow is shown after this answer.
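A hedged sketch of that revised daily flow; the HDFS path is the placeholder from above, and the real column list was elided in the original post, shown here as *:
-- shell step, run before the CTAS:
--   hadoop fs -rm -r /hdfs/path/to/folder
-- then let Drill re-create the directory via CTAS:
CREATE TABLE dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv` AS
SELECT *
FROM dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/parquet`;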
I know that just restarting the services may not be the best or most informative answer, but that's what appeared to work here. If anyone has any more information or thoughts to add based on the solution description above, please do leave a comment.

Progress SQL error in SSIS package: buffer too small for generated record

I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved.
I think that for you the values must be changed to 40 and 20000.
You can find more information here. The name of the parameter in that article was a bit different than in my database; it depends on the Progress version which is used.
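For orientation, a hedged sketch of where those parameters go; mydb is a hypothetical database name, and the values are the ones that worked for me (try 40 and 20000 as suggested above). The parameters can also be placed in the database's .pf parameter file instead of on the command line:
# start the OpenEdge database broker with larger SQL temp-store settings
proserve mydb -SQLTempStorePageSize 24 -SQLTempStoreBuff 3000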

REP-57054: 'In Process Job Terminated' error while executing Oracle Reports

When I execute Oracle Reports, I get the above-mentioned error.
I am using a query with three formula columns and generating XML for an RTF template.
All formula columns compiled successfully. How do I resolve this issue?
[Screenshot: Error While Executing]
A workaround is to set cacheSize to 50 in $INST_TOP/ora/10.1.2/reports/conf/rwbuilder.conf while waiting for this fix.
When cacheSize is 0 in the server conf file, Cache.manage() removes output files from the cache directory after a request finishes successfully, but a non-zero value disables the cache clean-up functionality.
For more details, check Oracle Doc ID 1237834.1.
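For orientation, a hedged sketch of the relevant rwbuilder.conf fragment; the exact element layout can differ between Reports versions, so compare against your own file before editing:
<!-- set a non-zero cacheSize to keep output files from being cleaned up -->
<cache class="oracle.reports.cache.RWCache">
   <property name="cacheSize" value="50"/>
</cache>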

Unload from Redshift to S3 fails

I'm running the following command in Redshift:
myDB=> unload ('select * from (select * from myTable limit 2147483647);')
to 's3://myBucket/'
credentials 'aws_access_key_id=***;aws_secret_access_key=***';
Here is what I get back:
ERROR: S3ServiceException:The bucket you are attempting to access must be addressed
using the specified endpoint. Please send all future requests to this
endpoint.,Status 301,Error PermanentRedirect,Rid 85ACD9FFAFC5CE8F,
ExtRid vsz4/0NdOAYbaJ48WYCnrYBCvuuL0cBTdcEN
DETAIL:
-----------------------------------------------
error: S3ServiceException:The bucket you are attempting to access must be addressed
using the specified endpoint. Please send all future requests to this
endpoint.,Status 301,Error PermanentRedirect,Rid 85ACD9FFAFC5CE8F,
ExtRid vsz4/0NdOAYbaJ48WYCnrYBCvuuL0cBTdcEN
code: 8001
context: Listing bucket=myBucket prefix=
query: 0
location: s3_unloader.cpp:181
process: padbmaster [pid=19100]
-----------------------------------------------
Any thoughts? Or maybe ideas on how to dump data from Redshift into MySQL or something similar?
This error message is returned when using path-like syntax with a non-US bucket. Create a new bucket in the same region as your Redshift cluster and everything should work.
You are missing the prefix part of the filename. Try using s3://myBucket/myPrefix.
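Combining both suggestions, a hedged sketch; the bucket name, prefix, and region are placeholders, and newer Redshift releases also accept an explicit REGION clause for a bucket outside the cluster's region:
unload ('select * from myTable')
to 's3://myBucket/myPrefix'
credentials 'aws_access_key_id=***;aws_secret_access_key=***'
region 'eu-west-1';  -- replace with the bucket's actual region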