Auto delete disk using libcloud - google-compute-engine

I'm trying to create a VM using libcloud with the auto-delete feature. The thing is that it only works for boot disks.
Example:
new_node = driver.create_node("my_node_str", size, get_root_snapshot(driver), location,ex_service_accounts=sa_scopes, ex_disk_auto_delete=True, ...
Then I attach a disk:
driver.attach_volume(my_node,...,ex_boot=False, ex_auto_delete=True)
When I go to GCE, auto delete for this volume is turned OFF.
So I try to change it "manually" using libcloud:
conn.ex_set_volume_auto_delete(vol, node)
And I get the error:
libcloud.common.google.GoogleBaseError: u"Invalid value for field 'disk': 'myvolume1-worker-disk'
But the disk is created, attached and it is working on my VM.
Debugging libcloud, everything seems to be OK according to the documentation (https://cloud.google.com/compute/docs/reference/latest/instances/setDiskAutoDelete):
It calls:
u'/zones/us-central1-b/instances/myinstancename/setDiskAutoDelete'
With parameters:
'deviceName': volume.name, 'autoDelete': auto_delete,
Any clues?

It looks like there may be a bug with attach_volume. I'll do a bit of testing and get that fixed up if so.
Regarding using ex_set_volume_auto_delete, you need to pass in a StorageVolume object. It looks like you are just passing in a string (the name of the disk).
You could try:
disk_obj = driver.ex_get_volume('string-name-of-disk')
driver.ex_set_volume_auto_delete(disk_obj, node_obj, auto_delete=True)
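Putting it together, a hedged end-to-end sketch (the credentials, project, zone, instance and disk names below are placeholders, and it assumes the ex_set_volume_auto_delete(volume, node, auto_delete=True) signature, so check your libcloud version's docstring):

# Hedged sketch: resolve the disk name to a StorageVolume before setting auto delete.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Placeholder credentials and project; substitute your own.
ComputeEngine = get_driver(Provider.GCE)
driver = ComputeEngine('service-account-email', 'key-file.json',
                       project='my-project', datacenter='us-central1-b')

node = driver.ex_get_node('myinstancename')
volume = driver.ex_get_volume('myvolume1-worker-disk')  # a StorageVolume, not a str
driver.ex_set_volume_auto_delete(volume, node, auto_delete=True)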
I'll follow up about the first issue when I look into it more.

How to interpret tcl command in openOCD manual

I'm completely new to tcl and am trying to understand how to script the command "adapter usb location" in openOCD.
From the openOCD manual, the command has this description:
I want to point it to the port with the red arrow below:
Thanks.
It's not 100% clear, but I would expect (from that snippet of documentation) a bus location to be a dotted “path” something like:
1-6
where the values are:
1 — Bus ID
6 — Port ID
Which would result in a call to the command being done like this:
adapter usb location 1-6
When there's a more complex structure involved (internally because of chained hubs) such as with the item above the one you pointed at, I'd instead expect:
1-5.3
Notice that there is a sequence of port IDs (5.3) in there to represent the structure. The resulting call would then be:
adapter usb location 1-5.3
Now for the caveats!
I can't tell what the actual format of those IDs is. They might just be numbers, or they might have some textual prefix (e.g., bus1-port6). Those text prefixes, if present, might contain a space (or other metacharacter), which would be deeply annoying to work with if true. You should be able to run adapter usb location without any other arguments to see what the current location is; be aware, though, that it might return the empty string (or give an error) if there is no current location. I welcome feedback on this, as that information does not appear to be present in any online documentation I can find (and I don't have things installed, so I can't just check).
I also have no idea what (if anything) to do with the device and interface IDs.

Undesired behavior when reloading modified nodes into aws-neptune

I'm using the bulk loader to load data from csv files on S3 into a Neptune DB cluster.
The data is loaded successfully. However, when I reload the data with some of the nodes' property values modified, the new value is not replacing the old one, but rather being added to it, making it a list of values separated by a comma. For example:
Initial values loaded:
~id,~label,ip:string,creationTime:date
2,user,"1.2.3.4",2019-02-13
If I reload this node with a different ip:
2,user,"5.6.7.8",2019-02-13
Then I run the following traversal: g.V(2).valueMap(), and get: ip=[1.2.3.4, 5.6.7.8], creationTime=[2019-02-13]
While this behavior may be beneficial for some use-cases, it's mostly undesired. I want the new value to replace the old one.
I couldn't find any reference in the documentation to the loader behavior in case of reloading nodes, and there is no relevant parameter to configure in the API request.
How can I have reloaded nodes overwriting the existing ones?
Update: Neptune now supports single-cardinality bulk loading. Just set:
updateSingleCardinalityProperties = TRUE
SOURCE: https://docs.aws.amazon.com/neptune/latest/userguide/load-api-reference-load.html
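For example, here is a hedged sketch of a bulk-load request from Python with that flag set (the endpoint, bucket, IAM role ARN and region below are placeholders):

import requests

# Hypothetical endpoint and S3 details; substitute your own cluster values.
loader_endpoint = "https://my-neptune-cluster:8182/loader"
payload = {
    "source": "s3://my-bucket/users.csv",
    "format": "csv",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "updateSingleCardinalityProperties": "TRUE",  # replace values instead of appending
}
resp = requests.post(loader_endpoint, json=payload)
print(resp.json())  # includes the loadId you can poll for status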
Currently the Neptune bulk loader uses Set cardinality. To update an existing property, the best way is to use Gremlin via the HTTP or WS endpoint.
From Gremlin you can specify that you want single cardinality (thus replacing rather than adding to the property value). An example would be:
g.V('2').property(single,"ip","5.6.7.8")
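If you are issuing that from Python, a minimal hedged sketch using gremlinpython (the websocket endpoint below is a placeholder):

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import Cardinality

# Hypothetical endpoint; use your cluster's Gremlin websocket URL.
conn = DriverRemoteConnection("wss://my-neptune-cluster:8182/gremlin", "g")
g = traversal().withRemote(conn)

# single cardinality replaces the existing value rather than appending to it
g.V("2").property(Cardinality.single, "ip", "5.6.7.8").iterate()
conn.close()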
Hope that helps,
Kelvin

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code to the new 4.2 framework. I am facing a few difficulties:
How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
Where should I implement PostDedupProcessor - context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list or just reject the tuples? Here I am also updating columns for a few tuples.
My file is not moving into the archive. The temporary output file is being generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it is stuck. I had printed the TableRowGenerator stream for the logs (end of DataProcessor).
1. and 2.:
You need to select the type of deduplication; there is not a big difference whether you choose table- or CDR-level deduplication.
The ite.businessLogic.transformation.outputType setting affects this. There is only one dedup; you cannot have both.
Select recordStream for CDR-level deduplication and do the transformation to table row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or statistic tuples
An error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about the wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. Debug sinks also write punctuation markers to the debug files.

How can I remove a node from my galera cluster?

Is there any better way of doing it, apart from setting wsrep_cluster_address='gcomm://' for each node that I want to remove?
I just did this, and it seems to have worked. On the node you want to evict, run:
>show global status like 'wsrep%';
Copy-paste the wsrep_gcomm_uuid value.
Go to another node and evict from there, assuming the UUID is 1de97dad-f609-11e5-8a50-ce2e621b0c42:
SET GLOBAL wsrep_provider_options="evs.evict=1de97dad-f609-11e5-8a50-ce2e621b0c42";
If the node is already shut down or non-responsive, you can get the UUID from any other node from wsrep_evs_delayed.
I see two choices here:
http://www.severalnines.com/blog/online-schema-upgrade-mysql-galera-cluster-using-rsu-method
(You aren't doing RSU, but that involves "removing a node".)

Use Mysql in dev/prod and H2 in test

Using the Play framework 2.1, I'm trying to find the best way to have two different database configurations:
One to run my application based on mysql
One to test my application based on H2
While it is very easy to do one or the other, I run into the following problems when I try to do both:
I cannot use the same database evolutions, because some MySQL-specific commands do not work with H2 even in MySQL mode: this means two sets of evolutions and two separate database names.
I'm not certain how to override the main application.conf file with another one reserved for tests in test mode. What I tried (passing the file name or overriding keys from the command line) seems to be reserved for prod mode.
My question: can anyone recommend a good way to do both (MySQL all the time and H2 only in tests) without overly complicating running the application? Google did not help me.
Thanks for your help.
There are a couple of tricks you might find useful.
Firstly, MySQL's /*! */ notation allows you to add code which MySQL will obey, but other DBs will ignore, for example:
create table Users (
id bigint not null auto_increment,
name varchar(40)
) /*! engine=InnoDB */
It's not a silver bullet, but it'll let you paper over some of the differences between MySQL and H2's syntax. It's a MySQL-ism, so it won't help with other databases, but since most other databases aren't as quirky as MySQL, you probably wouldn't need it - we migrated our database from MySQL to PostgreSQL, which doesn't support the /*! */ notation, but PostgreSQL is similar enough to H2 that we didn't need it.
If you want to use a different config for dev and prod, you're probably best off having extra config for prod. The reason for this is that you'll probably start your dev server with play run, and start your prod server with play stage; target/start. target/start can take a -Dconfig.resource parameter. For example, create an extra config file prod.conf for prod that looks like:
include "application.conf"
# Extra config for prod - this will override the dev values in application.conf
db.default.driver=...
db.default.url=...
...
and create a start_prod script that looks like:
#!/bin/sh
# Optional - you might want to do this as part of the build/deploy process instead
#play stage
target/start -Dconfig.resource=prod.conf
In theory, you could do it the other way round, and have application.conf contain the prod conf, and create a dev.conf file, but you'll probably want a script to start prod anyway (you'll probably end up needing extra JVM/memory/GC parameters, or to add it to rc.d, or whatever).
Using different database engines is probably the worst possible scenario, as you wrote yourself: differences in some functions, reserved keywords, etc. mean that you sometimes need to write custom statements that are very specific to the selected DB engine. It is better to use two separate databases with the same engine.
Unfortunately I don't know the issues with config overriding, so if the default ways of overriding configs fail... override it in application.conf, so you'll be able to comment out the whole block quickly.
Here's how to use in memory database for tests:
import org.junit.Before;
import play.test.WithApplication;
import static play.test.Helpers.*;

public class ApplicationTest extends WithApplication {
    @Before
    public void setup() {
        // starts a fake application backed by an in-memory database for this test
        start(fakeApplication(inMemoryDatabase("default-test"), fakeGlobal()));
    }
    /// skipped ....
}
inMemoryDatabase() will use the H2 driver by default.
You can find more details in the source code.