When I try to take a data backup from a Couchbase VM using the command below, I get the following error:
cbbackup -v http://...:8091 /opt/couchbase/backup -u Administrator -p ******
2018-10-22 07:13:01,647: mt cbbackup...
2018-10-22 07:13:01,648: mt source : http://**.***.**.***:8091
2018-10-22 07:13:01,648: mt sink : /opt/couchbase/backup
2018-10-22 07:13:01,648: mt opts : {'username': '<xxx>', 'verbose': 1, 'extra':
{'max_retry': 10.0, 'rehash': 0.0, 'dcp_consumer_queue_length': 1000.0, 'data_only': 0.0, 'uncompress': 0.0, 'nmv_retry': 1.0, 'conflict_resolve': 1.0, 'cbb_max_mb': 100000.0, 'report': 5.0, 'mcd_compatible': 1.0, 'try_xwm': 1.0, 'backoff_cap': 10.0, 'batch_max_bytes': 400000.0, 'report_full': 2000.0, 'flow_control': 1.0, 'batch_max_size': 1000.0, 'seqno': 0.0, 'design_doc_only': 0.0, 'allow_recovery_vb_remap': 0.0, 'recv_min_bytes': 4096.0}
, 'collection': None, 'ssl': False, 'threads': 4, 'key': None, 'password': '<xxx>', 'id': None, 'bucket_source': None, 'silent': False, 'dry_run': False, 'single_node': False, 'vbucket_list': None, 'separator': '::', 'mode': 'diff'}
2018-10-22 07:13:01,655: mt Starting new HTTP connection (1): *********
2018-10-22 07:13:01,662: mt bucket: sample_bucket
Exception in thread s3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/opt/couchbase/lib/python/pump_bfd.py", line 646, in run
return self.future_done(future, rv)
UnboundLocalError: local variable 'future' referenced before assignment
I'm using Couchbase EE 5.1.1. Any suggestions?
Use cbbackupmgr instead for EE
I ended up reading the documentation and found out that for the Enterprise edition there's a backup manager, "cbbackupmgr", which is faster and more efficient than cbbackup and cbrestore.
All it requires is to first configure an empty directory as the backup directory. For more information, please read the link below:
https://docs.couchbase.com/server/5.5/backup-restore/cbbackupmgr-tutorial.html
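As a minimal sketch of that workflow (the archive path and repository name here are just placeholders), the tutorial boils down to configuring an archive once and then backing up into it:
cbbackupmgr config --archive /opt/couchbase/backup --repo example_repo
cbbackupmgr backup --archive /opt/couchbase/backup --repo example_repo --cluster couchbase://127.0.0.1 --username Administrator --password ******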
According to the info here, plugin_host is an external process that is used to execute plugin code. Unfortunately, I'm getting an error (see title), and I probably need to configure this process, but how? Thanks.
Please note that I'm using the 32-bit version (https://www.sublimetext.com/3) on Debian 11.
System packages (zip files) are installed in /usr/local/share/sublime-text/Packages
I created a symlink in /usr/local/bin that links to the previous folder
Python is installed in $HOME/.config/sublime-text-3/Lib/python3.3
Both the sublime.py and sublime_plugin.py files are located in the $HOME/.config/sublime-text-3/Lib/ folder
Console output:
UI scale: 1.002 (gtk text scale)
startup, version: 3211 linux x32 channel: stable
executable: /usr/local/bin/sublime
working dir: /
packages path: /home/fox/.config/sublime-text-3/Packages
state path: /home/fox/.config/sublime-text-3/Local
zip path: /usr/local/bin/Packages
zip path: /home/fox/.config/sublime-text-3/Installed Packages
ignored_packages: ["Markdown", "Vintage"]
pre session restore time: 0.320594
startup time: 1.57776
first paint time: 1.8247
error: plugin_host has exited unexpectedly, plugin functionality won't be available until Sublime Text has been restarted
Preferences.sublime-settings:
{
    "update_check": false,
    "color_scheme": "Packages/dark.scheme",
    "tab_size": 4,
    "translate_tabs_to_spaces": true,
    "trim_automatic_white_space": true,
    "trim_trailing_white_space_on_save": true,
    "ensure_newline_at_eof_on_save": true,
    "detect_indentation": false,
    "copy_with_empty_selection": false,
    "find_selected_text": true,
    "detect_slow_plugins": false,
    "auto_complete_delay": 500,
    "font_face": "Source Code Pro",
    "font_options":
    [
        "directwrite"
    ],
    "font_size": 14,
    "highlight_line": true,
    "ignored_packages": ["Markdown", "Vintage"]
}
❯ /usr/local/bin/plugin_host
Unexpected number of arguments, expected 2
❯ /usr/local/bin/plugin_host --help
unable to open channels
Right after the startup of the app, everything seems fine: all four cores are online, running in cluster mode. Two cores have the same name; the remaining two differ. Here is the ecosystem.json file:
{
    "apps": [
        {
            "name": "CHS",
            "script": "./main.js",
            "out_file": "../logs/appname.log",
            "error_file": "../logs/appname.log",
            "log_date_format": "YYYY-MM-DD HH:mm:ss Z",
            "rotateModule": true,
            "compress": true,
            "dateFormat": "YYYY-MM-DD",
            "max_size": "10M",
            "retain": 300,
            "merge_logs": true,
            "env": {
                "NODE_ENV": "production",
                "METEOR_SETTINGS": "{ ... }"
            },
            "env_production": {
                "NODE_ENV": "production"
            },
            "instances": "1",
            "exec_mode": "cluster"
        },
        {
            "name": "ORC",
            "script": "./main.js",
            "out_file": "../logs/appname.log",
            "error_file": "../logs/appname.log",
            "log_date_format": "YYYY-MM-DD HH:mm:ss Z",
            "rotateModule": true,
            "compress": true,
            "dateFormat": "YYYY-MM-DD",
            "max_size": "10M",
            "retain": 300,
            "merge_logs": true,
            "env": {
                "NODE_ENV": "production",
                "METEOR_SETTINGS": "{ ... }"
            },
            "env_production": {
                "NODE_ENV": "production"
            },
            "instances": "1",
            "exec_mode": "cluster"
        },
        {
            "name": "WCR",
            "script": "./main.js",
            "out_file": "../logs/appname.log",
            "error_file": "../logs/appname.log",
            "log_date_format": "YYYY-MM-DD HH:mm:ss Z",
            "rotateModule": true,
            "compress": true,
            "dateFormat": "YYYY-MM-DD",
            "max_size": "10M",
            "retain": 300,
            "merge_logs": true,
            "env": {
                "NODE_ENV": "production",
                "METEOR_SETTINGS": "{ ... }"
            },
            "instances": "2",
            "exec_mode": "cluster"
        }
    ]
}
A few seconds after startup, the CHS core stops executing any code for some reason (without any errors), while the other three work fine. I have tried reordering the cores in ecosystem.json, adding "pmx": false (per an online suggestion, to bypass some parts of pm2 itself), renaming the cores, and a few other things, but the issue stayed the same. I tried to make sure there is nothing in the app code that could cause this, and I searched the logs for errors; this is all I have (in the pm2 logs):
2020-04-17T14:16:49: PM2 error: (node:3720) [DEP0007] DeprecationWarning: worker.suicide is deprecated. Please use worker.exitedAfterDisconnect.
2020-04-17T14:17:01: PM2 error: Error: write ENOTSUP
at ChildProcess.target._send (internal/child_process.js:692:20)
at ChildProcess.target.send (internal/child_process.js:576:19)
at senderHelper (internal/cluster/utils.js:25:15)
at send (internal/cluster/master.js:357:10)
at handle.add (internal/cluster/master.js:329:5)
at SharedHandle.add (internal/cluster/shared_handle.js:29:3)
at queryServer (internal/cluster/master.js:318:10)
at Worker.onmessage (internal/cluster/master.js:250:5)
at ChildProcess.onInternalMessage (internal/cluster/utils.js:42:8)
at emitTwo (events.js:131:20)
2020-04-17T14:17:01 PM2 log: App name:CHS id:0 disconnected
Also, in the end, this happens only on one Windows VM instance (on the second one, no issues at all); the instances have identical configurations. The PM2 version was 4.2.3, but I also tried 3.5.1 and got the same issue. Does anybody have any idea how to troubleshoot this?
It later turned out that the issue is related to UDP, and probably to features that Node does not support on Windows in cluster mode; see:
https://nodejs.org/api/cluster.html#cluster_event_listening
The root cause was that UDP does not work in cluster mode on PM2. We were using SNMP, which relies on UDP under the hood, and the logs did not make it obvious that this was the problem. Spinning up a new core in fork mode to run the SNMP part resolved the issue.
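For illustration, the fix amounts to adding a separate fork-mode entry for the UDP/SNMP work to the ecosystem file; a minimal sketch, with a hypothetical name and script:
{
    "name": "SNMP",
    "script": "./snmp-worker.js",
    "instances": 1,
    "exec_mode": "fork"
}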
I get an error from time to time when deploying nodejs10 cloud functions to GCP. The error seems to go away on its own; I just redeploy the same thing a few times. Does anyone know what causes it? Here's the log:
command: gcloud beta functions deploy exchangeIcon --verbosity debug --runtime nodejs10 --memory 128 --region europe-west1 --timeout 5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point app
DEBUG: Running [gcloud.beta.functions.deploy] with arguments: [--entry-point: "app", --memory: "134217728", --region: "europe-west1", --runtime: "nodejs10", --set-env-vars: "OrderedDict([(u'FUNCTION_REGION', u'europe-west1'), (u'BUILD_DATE', u'2019-05-09T10:01:05.497Z')])", --timeout: "5", --trigger-http: "True", --verbosity: "debug", NAME: "exchangeIcon"]
INFO: Not using a .gcloudignore file.
INFO: Not using a .gcloudignore file.
Deploying function (may take a while - up to 2 minutes)...
..........................................................................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message:
Traceback (most recent call last):
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code.
In my Stackdriver Logging I just see INVALID_ARGUMENT, but nothing else.
The problem stems from your terminal command not being properly formatted.
--verbosity=debug
is the proper way to write this flag. The same goes for your runtime and the other flags.
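For reference, here is a sketch of the original command rewritten with explicit = signs (all values unchanged):
gcloud beta functions deploy exchangeIcon --verbosity=debug --runtime=nodejs10 --memory=128 --region=europe-west1 --timeout=5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point=app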
Unable to receive email on task failure or even using EmailOperator
Hi Guys,
I am unable to receive email from my box even after adding the required parameters to send one.
Below is what my default_args looks like:
default_args = {
    'owner': 'phonrao',
    'depends_on_past': False,
    #'start_date': datetime(2019, 3, 28),
    'start_date': airflow.utils.dates.days_ago(2),
    'email': ['phonrao@gmail.com'],
    'email_on_failure': True,
    'email_on_retry': True,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
    #'on_failure_callback': report_failure,
    #'end_date': datetime(2020, 4, 1),
    #'schedule_interval': '@hourly',
}
I have a few SimpleHttpOperator tasks in between; those work fine and succeed, but they do not send email on error (I purposely introduced an error to check whether they send any email). Below is an example of one of my tasks:
t1 = SimpleHttpOperator(
    task_id='t1',
    http_conn_id='http_waterfall',
    endpoint='/update_data',
    method='POST',
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    log_response=True,
    dag=dag,
)
and this is my EmailOperator task
t2 = EmailOperator(
    dag=dag,
    task_id="send_email",
    to='phonrao@gmail.com',
    subject='Success',
    html_content="<h3>Success</h3>"
)
t2 >> t1
Below is the error from the logs:
[2019-04-02 15:28:21,305] {{base_task_runner.py:101}} INFO - Job 845: Subtask send_email [2019-04-02 15:28:21,305] {{cli.py:520}} INFO - Running <TaskInstance: schedulerDAG.send_email 2019-04-02T15:23:08.896589+00:00 [running]> on host a47cd79aa987
[2019-04-02 15:28:21,343] {{logging_mixin.py:95}} INFO - [2019-04-02 15:28:21,343] {{configuration.py:255}} WARNING - section/key [smtp/smtp_user] not found in config
[2019-04-02 15:28:21,343] {{models.py:1788}} ERROR - [Errno 99] Cannot assign requested address
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1657, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/operators/email_operator.py", line 78, in execute
mime_subtype=self.mime_subtype, mime_charset=self.mime_charset)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 55, in send_email
mime_subtype=mime_subtype, mime_charset=mime_charset, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 101, in send_email_smtp
send_MIME_email(smtp_mail_from, recipients, msg, dryrun)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 121, in send_MIME_email
s = smtplib.SMTP_SSL(SMTP_HOST, SMTP_PORT) if SMTP_SSL else smtplib.SMTP(SMTP_HOST, SMTP_PORT)
File "/usr/local/lib/python3.6/smtplib.py", line 251, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/lib/python3.6/smtplib.py", line 336, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/lib/python3.6/smtplib.py", line 307, in _get_socket
self.source_address)
File "/usr/local/lib/python3.6/socket.py", line 724, in create_connection
raise err
File "/usr/local/lib/python3.6/socket.py", line 713, in create_connection
sock.connect(sa)
OSError: [Errno 99] Cannot assign requested address
[2019-04-02 15:28:21,351] {{models.py:1817}} INFO - All retries failed; marking task as FAILED
Below is my airflow.cfg:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow
# smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
Has anyone encountered this issue, and does anyone have suggestions on how to resolve it?
If your Airflow is running on Kubernetes (installed by Helm chart), you should take a look at the "airflow-worker-0" pod and make sure the environment variables such as SMTP_HOST or SMTP_USER ... are available in the config. For simple debugging, get into the airflow-worker container, run python, and then try these commands to make sure email sending works correctly:
import airflow
airflow.utils.email.send_email('example@gmail.com', 'Airflow TEST HERE', 'This is airflow status success')
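For example, getting a Python shell inside the worker pod might look like this (the pod name is assumed from the default chart):
kubectl exec -it airflow-worker-0 -- python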
I had the same issue; after fixing the SMTP environment variables, it now works.
I am using Couchbase Server 4.5.1.
I am trying to cbrestore a bucket that was backed up with cbbackup on Ubuntu, and I am getting the following exception:
/Applications/Couchbase\ Server.app/Contents/Resources/couchbase-core/bin/cbrestore . http://localhost:8091 -u Administrator -p st0ryplAyr --bucket-source=storyplayer-api --bucket-destination=storyplayer-api -v -x rehash=1
2017-06-27 09:51:02,610: mt cbrestore...
2017-06-27 09:51:02,610: mt source : .
2017-06-27 09:51:02,610: mt sink : http://localhost:8091
2017-06-27 09:51:02,610: mt opts : {'username': '<xxx>', 'verbose': 1, 'extra': {'max_retry': 10.0, 'rehash': 1.0, 'dcp_consumer_queue_length': 1000.0, 'data_only': 0.0, 'uncompress': 0.0, 'nmv_retry': 1.0, 'conflict_resolve': 1.0, 'cbb_max_mb': 100000.0, 'report': 5.0, 'mcd_compatible': 1.0, 'try_xwm': 1.0, 'backoff_cap': 10.0, 'batch_max_bytes': 400000.0, 'report_full': 2000.0, 'flow_control': 1.0, 'batch_max_size': 1000.0, 'seqno': 0.0, 'design_doc_only': 0.0, 'recv_min_bytes': 4096.0}, 'ssl': False, 'threads': 4, 'to_date': None, 'key': None, 'password': '<xxx>', 'id': None, 'bucket_source': 'storyplayer-api', 'silent': False, 'dry_run': False, 'from_date': None, 'bucket_destination': 'storyplayer-api', 'add': False, 'vbucket_list': None}
2017-06-27 09:51:02,615: mt bucket: storyplayer-api
Exception in thread s0:
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/Applications/Couchbase Server.app/Contents/Resources/couchbase-core/lib/python/pump_mc.py", line 91, in run
rv, batch, need_backoff = self.scatter_gather(mconns, batch)
File "/Applications/Couchbase Server.app/Contents/Resources/couchbase-core/lib/python/pump_cb.py", line 72, in scatter_gather
rv, conn = self.find_conn(mconns, vbucket_id, msgs)
File "/Applications/Couchbase Server.app/Contents/Resources/couchbase-core/lib/python/pump_cb.py", line 316, in find_conn
host_port = serverList[vBucketMap[vbucket_id][0]]
IndexError: list index out of range
I had the same (or a comparable) error previously, which is why I am using the option rehash=1, but this time it does not help.
Any idea about what I could do?
I'm a little bit late to answer this question, but you have to add "-x rehash=1" to your command:
cbrestore ~/backups http://127.0.0.1:8091 -u Administrator -p password -x rehash=1
The reason is that Couchbase has 1024 vBuckets on most operating systems, but on macOS there are just 64 vBuckets (for optimization purposes), so you need to rehash to redistribute the data across all vBuckets.
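To make the rehash concrete, here is an illustrative Python sketch of the key-to-vBucket mapping (the CRC32-based formula is the commonly documented one; treat the exact shift and mask as an assumption). The same key lands in different vBuckets when the vBucket count differs, which is why a 1024-vBucket backup must be rehashed for a 64-vBucket Mac cluster:
import zlib

def vbucket_id(key, num_vbuckets):
    # Hash the key with CRC32 and map it onto the cluster's vBuckets.
    crc = zlib.crc32(key.encode('utf-8')) & 0xffffffff
    return ((crc >> 16) & 0x7fff) % num_vbuckets

print(vbucket_id('user::123', 1024))  # vBucket on Linux/Windows clusters
print(vbucket_id('user::123', 64))    # vBucket on a Mac cluster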