How can I remove a node from my Galera cluster?

Is there any better way of doing it, apart from setting wsrep_cluster_address='gcomm://' for each node that I want to remove?

I just did this and it seems to have worked. On the node you want to evict, run:
show global status like 'wsrep%';
Copy the value of wsrep_gcomm_uuid. Then go to another node and evict from there. Assuming the UUID is 1de97dad-f609-11e5-8a50-ce2e621b0c42:
SET GLOBAL wsrep_provider_options="evs.evict=1de97dad-f609-11e5-8a50-ce2e621b0c42";
If the node is already shut down or non-responsive, you can get the UUID from any other node's wsrep_evs_delayed status variable.
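If you want to script those same two steps, here is a minimal sketch assuming the pymysql driver; the host names and credentials are placeholders, not values from the question:
import pymysql

# Step 1: on the node being removed, read its gcomm UUID.
leaving = pymysql.connect(host="node-to-remove", user="root", password="secret")
with leaving.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_gcomm_uuid'")
    uuid = cur.fetchone()[1]

# Step 2: on any surviving node, evict that UUID from the cluster.
survivor = pymysql.connect(host="surviving-node", user="root", password="secret")
with survivor.cursor() as cur:
    cur.execute("SET GLOBAL wsrep_provider_options = %s", ("evs.evict=" + uuid,))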

I see two choices here:
http://www.severalnines.com/blog/online-schema-upgrade-mysql-galera-cluster-using-rsu-method
(You aren't doing RSU, but that involves "removing a node".)

OpenShift inventory file: define a node as a master node and an infra-node

Question 1: Is it possible to define a node as both a master node and an infra-node? And if so, how is this done in the inventory file?
Question 2: Are there restrictions on where to put etcd, registries, and routers? Or can we put them on the master nodes or infra-nodes?
OpenShift 3.11 ships with default node group definitions; the "node-config-master-infra" group covers your question 1 (see the inventory sketch below). For question 2, the node's capacity and your DNS infrastructure are the main restrictions. There is also an all-in-one-node example in the documentation, and I suggest checking the system/environment requirements as well.
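As a rough illustration only (the host name is a placeholder, and the exact group and variable names should be checked against the 3.11 docs), an inventory entry that puts the master and infra roles on the same host looks something like:
[nodes]
master1.example.com openshift_node_group_name='node-config-master-infra'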

Undesired behavior when reloading modified nodes into aws-neptune

I'm using the bulk loader to load data from csv files on S3 into a Neptune DB cluster.
The data is loaded successfully. However, when I reload the data with some of the nodes' property values modified, the new value does not replace the old one, but rather gets added to it, making it a list of values separated by a comma. For example:
Initial values loaded:
~id,~label,ip:string,creationTime:date
2,user,"1.2.3.4",2019-02-13
If I reload this node with a different ip:
2,user,"5.6.7.8",2019-02-13
Then I run the following traversal: g.V(2).valueMap(), and get: ip=[1.2.3.4, 5.6.7.8], creationTime=[2019-02-13]
While this behavior may be beneficial for some use-cases, it's mostly undesired. I want the new value to replace the old one.
I couldn't find any reference in the documentation to the loader behavior in case of reloading nodes, and there is no relevant parameter to configure in the API request.
How can I have reloaded nodes overwriting the existing ones?
Update: Neptune now supports single-cardinality bulk loading. Just set:
updateSingleCardinalityProperties = TRUE
SOURCE: https://docs.aws.amazon.com/neptune/latest/userguide/load-api-reference-load.html
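For example, a loader request with that flag set might look roughly like the following; the endpoint, S3 path, IAM role ARN, and region are placeholders, not values from the question:
import requests

# Hypothetical loader request; endpoint, bucket, role ARN and region are placeholders.
resp = requests.post(
    "https://your-neptune-endpoint:8182/loader",
    json={
        "source": "s3://your-bucket/nodes.csv",
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "updateSingleCardinalityProperties": "TRUE",
    },
)
print(resp.json())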
At the time this answer was written, the Neptune bulk loader used Set cardinality. To update an existing property, the best way is to use Gremlin via the HTTP or WebSocket endpoint.
From Gremlin you can specify that you want single cardinality (thus replacing rather than adding to the property value). An example would be:
g.V('2').property(single,"ip","5.6.7.8")
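If you are connecting from Python, a minimal sketch of the same update with gremlinpython would be (the endpoint is a placeholder):
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import Cardinality

# Endpoint is a placeholder; use your cluster's Gremlin WebSocket endpoint.
conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# single cardinality replaces the existing value instead of appending to it
g.V('2').property(Cardinality.single, 'ip', '5.6.7.8').next()
conn.close()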
Hope that helps,
Kelvin

Auto delete disk using libcloud

I'm trying to create a VM using libcloud with the auto-delete feature. The thing is that it only works for boot disks.
Example:
new_node = driver.create_node("my_node_str", size, get_root_snapshot(driver), location, ex_service_accounts=sa_scopes, ex_disk_auto_delete=True, ...)
Then I attach a disk:
driver.attach_volume(my_node,...,ex_boot=False, ex_auto_delete=True)
Then I go to GCE and this volume's auto-delete is turned OFF.
So I try to change it "manually" using libcloud:
conn.ex_set_volume_auto_delete(vol, node)
And I get the error:
libcloud.common.google.GoogleBaseError: u"Invalid value for field 'disk': 'myvolume1-worker-disk'
But the disk is created, attached, and working on my VM.
Debugging libcloud, everything seems to be OK according to the documentation (https://cloud.google.com/compute/docs/reference/latest/instances/setDiskAutoDelete):
It calls:
u'/zones/us-central1-b/instances/myinstancename/setDiskAutoDelete'
With parameters:
'deviceName': volume.name, 'autoDelete': auto_delete,
Any clues?
It looks like there may be a bug with attach_volume. I'll do a bit of testing and get that fixed up if so.
Regarding using ex_set_volume_auto_delete, you need to pass in a StorageVolume object. It looks like you are just passing in a string (the name of the disk).
You could try:
disk_obj = driver.ex_get_volume('string-name-of-disk')
driver.ex_set_volume_auto_delete(disk_obj, node_obj, auto_delete=True)
I'll follow up about the first issue when I look into it more.
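For reference, here is a rough end-to-end sketch of that workaround; the service account, key file, project, node name, and disk name are placeholders, so adapt them to your setup:
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Placeholder credentials and project; use your own service account and key file.
ComputeEngine = get_driver(Provider.GCE)
driver = ComputeEngine('sa@my-project.iam.gserviceaccount.com', 'key.json',
                       project='my-project', datacenter='us-central1-b')

node = driver.ex_get_node('myinstancename')
disk = driver.ex_get_volume('myvolume1-worker-disk')

# Attach the non-boot disk, requesting auto-delete at attach time.
driver.attach_volume(node, disk, ex_boot=False, ex_auto_delete=True)

# If the flag did not stick, set it explicitly on the attached disk.
driver.ex_set_volume_auto_delete(disk, node)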

FBLOG_TRACE() not logging to logfile -- FBLOG_INFO() logging OK -- what is the difference?

FireBreath 1.6 -- VC2010.
No logging with FBLOG_TRACE("StaticInitialize()", "INIT-trace");
Settings:
outMethods.push_back(std::make_pair(FB::Log::LogMethod_File, "U:/logs/PT.log"));
...
FB::Log::LogLevel getLogLevel(){
    return FB::Log::LogLevel_Trace;
...
Changing "FBLOG_TRACE" to "FBLOG_INFO" makes logging to the logfile work. I don't understand the reason.
The problem was that the function was not inserted in its proper place:
FB::Log::LogLevel getLogLevel(){
    return FB::Log::LogLevel_Trace; // Now Trace and above is logged.
}
The documentation's description of logging says:
Enabling logging
...
regenerate your project using the prep* scripts
open up Factory.cpp in your project. You need to define the following function inside the class definition for PluginFactory:
...
About log levels
...
If you want to change the log level, you need to define the following in your Factory.cpp:
Referring to the above, that means "somewhere in Factory.cpp", which is incorrect. The description should instead say:
If you want to change the log level, you need to define the following function inside the class definition for PluginFactory:
I dragged the function from the bottom of Factory.cpp to inside the PluginFactory class, and now it works as expected.
The entire purpose of having different log levels (FBLOG_FATAL, FBLOG_ERROR, FBLOG_WARN, FBLOG_INFO, FBLOG_DEBUG, FBLOG_TRACE) is so that you can configure which level to use and anything below that level is hidden. The default log level in FireBreath is FB::Log::LogLevel_Info, which means that nothing below INFO (such as DEBUG or TRACE) will be visible.
You can change this by overriding FB::FactoryBase::getLogLevel() in your Factory class to return FB::Log::LogLevel_Trace.
The method you'd be overriding is: https://github.com/firebreath/FireBreath/blob/master/src/PluginCore/FactoryBase.cpp#L78
The definition of the LogLevel enum:
https://github.com/firebreath/FireBreath/blob/master/src/ScriptingCore/logging.h#L69
There was a version of FireBreath in which this didn't work; I think it was fixed by 1.6.0, but I don't remember for certain. If that doesn't work, try updating to the latest on the 1.6 branch (currently 1.6.1 as of this writing, though I haven't found time to release it yet).

How can I start multiple Tornado server instances on multiple ports

I need to start the blog demo on the following ports:
127.0.0.1:8000
127.0.0.1:8001
127.0.0.1:8002
127.0.0.1:8003
When I run the application using:
./demos/blog/blog.py
it starts on port 8888, as defined by:
define("port", default=8888, help="run on the given port", type=int)
How do I run multiple instances in multiple ports?
I found what I was looking for:
./demos/blog/blog.py --port=8889
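So, to cover the ports in the question, start one process per port, for example:
./demos/blog/blog.py --port=8000
./demos/blog/blog.py --port=8001
./demos/blog/blog.py --port=8002
./demos/blog/blog.py --port=8003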
You can also have a single application listen on several ports:
application = tornado.web.Application([
    (r".*", MainHandler),
], **app_settings)

# each listen() call binds an additional port on the same IOLoop
application.listen(8080)
application.listen(8081)
tornado.ioloop.IOLoop.current().start()
Be aware that the --port option gets parsed by the options module of the Tornado framework. The line that looks like this:
define("port", default=8888, help="Port to listen on", type=int)
defines the option, and later there's a call to the options module that parses command-line variables automatically. I mention this because you may later want to define other variables in programs you build on the framework that change from instance to instance; a minimal sketch of the pattern follows.
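The sketch below pulls these pieces together; the handler and URL pattern are illustrative, not taken from the blog demo:
import tornado.ioloop
import tornado.web
from tornado.options import define, options, parse_command_line

define("port", default=8888, help="run on the given port", type=int)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("served from port %d" % options.port)

if __name__ == "__main__":
    parse_command_line()                      # picks up --port=8001 and friends
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(options.port)                  # one process per port
    tornado.ioloop.IOLoop.current().start()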
Use supervisord to start multiple instances. Since each app takes the --port= argument, you can set something like this up.
Here's the setup I use for Around The World:
[group:aroundtheworld]
programs=aroundtheworld-10001,aroundtheworld-10002,aroundtheworld-10003
[program:aroundtheworld-10001]
command=/var/lib/tornado/aroundtheworld/app.py --port=10001
directory=/var/lib/tornado/aroundtheworld/
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado/aroundtheworld-10001.log
stdout_logfile_maxbytes=500MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
[program:aroundtheworld-10002]
command=/var/lib/tornado/aroundtheworld/app.py --port=10002
directory=/var/lib/tornado/aroundtheworld/
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado/aroundtheworld-10002.log
stdout_logfile_maxbytes=500MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
[program:aroundtheworld-10003]
command=/var/lib/tornado/aroundtheworld/app.py --port=10003
directory=/var/lib/tornado/aroundtheworld/
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado/aroundtheworld-10003.log
stdout_logfile_maxbytes=500MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
If you need help with how to set up Nginx or something similar to load balance across these then submit a new question.
Copy /demos/blog/blog.py to blog_otherports.py, change the port in blog_otherports.py, and run python blog_otherports.py.
You need to run two processes.