Setting IBM Streams instance properties and having them take effect without restarting the instance - infosphere-spl

I have to set some instance properties to work on consistent region projects. I set those properties, but they took effect only after I restarted the instance, which of course resulted in all the running jobs being cancelled.
I have hundreds of jobs running that I cannot disturb, but I want to set 'checkpointRepositoryConfiguration' and the other instance properties required to work with consistent regions.
Is there a way to set these instance properties without restarting the instance? And if I do have to restart it, is there a way to prevent the jobs running on that instance from being cancelled?

Sorry, we do not support dynamically updating these properties. We have a task open for this enhancement, but it is not supported yet.
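For reference, a rough sketch of how these properties are set today with streamtool, restart included; treat the fileSystem value, the "Dir" key, and the path as illustrative assumptions (the property names may also carry an instance. prefix, depending on your Streams version):

    streamtool setproperty -i myInstance checkpointRepository=fileSystem
    streamtool setproperty -i myInstance 'checkpointRepositoryConfiguration={"Dir": "/shared/checkpoints"}'
    streamtool stopinstance -i myInstance
    streamtool startinstance -i myInstance

Until dynamic updates land, the stop/start pair is unavoidable, and that is what cancels the running jobs.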

Related

SQL update automatically rolling back or changing after some time

I have an Azure SQL database where I am executing a change with a C# call (using await db.SaveChangesAsync();).
This works fine and I can see the update in the table, and in the APIs that I call which pull the data. However, roughly 30-40 minutes later, I run the API again and the value is back to the initial value. I check the database and see that it is indeed back to the initial value.
I can't figure out why this is, and I'm not sure how to go about tracking it down. I tried to use the Track Changes SQL command but it doesn't give me any insight into WHY the change is happening, or in what process, just that it is happening.
BTW, this is a test Azure instance that nobody has access to but me, and there are no other processes. I'm assuming this is some kind of delayed transaction rollback, but it would be nice to know how to verify that.
I figured out the issue.
I'm using an Azure Free Tier service, which runs on a shared virtual machine. When the app went inactive, it was shut down, and restarted on demand when I issued a new request.
In addition, I had a Seed method in my Entity Framework Migrations configuration that reset the particular record I was changing to 0, and when the app restarted, it re-ran the migration, because it was configured to do so in my web.config.
Simply disabling the EF Migrations and republishing does the trick (or when I upgrade to a better tier for real production, the problem will also go away). I verified that records outside of those expressly mentioned in the Migration Seed method were not affected by this change, so the Seed method was clearly the cause, and after disabling the migrations I am not seeing the problem any more.

fiware spagobi cockpit graphics not updating

None of the graphics in my cockpit are being updated, even though the dataset from the data source is scheduled to refresh every minute, and checking in the database shows that the dataset really is updated correctly every minute...
My dataset config:
How can I get the cockpit to show the updated graphics? Do I need to change something in the SpagoBI server or its configuration?
The cockpit uses a cache mechanism that makes it possible to query and join datasets coming from different data sources; this cache has nothing to do with the dataset's persistence.
At this moment, there are two ways to get updated data while using the cockpit:
by cleaning the cache using the button inside the cockpit itself;
by using the cache cleaning scheduling setting.
In the latter case, enter Configuration Management as admin and change the value of the
SPAGOBI.CACHE.SCHEDULING_FULL_CLEAN
variable to HOURLY. This setting creates a job that periodically (every hour, which is the minimum) cleans the cache used by cockpits.

Amazon RDS MySQL: Allowing the use of Events and Triggers

Today I attempted to turn on the Event Scheduler on my Amazon RDS instance.
I received the following error:
Access denied; you need (at least one of) the SUPER privilege(s) for
this operation.
I've been looking at a couple of posts around the internet on how to solve this, but I haven't found anything of real use. I'm not sure where to even start looking for a solution, because these posts state that Amazon doesn't grant SUPER privileges to anyone.
To enable the Event Scheduler on RDS you will need to specify this in a parameter group.
You will need to either create a new parameter group or modify an existing one. This can be done via the web console or, as with many AWS things, via the CLI/API/SDK.
You want to change the value of event_scheduler to either 1 or ON.
Once this has been changed you can then apply the parameter group to an existing database instance either via the console or the CLI/API/SDK.
To make the database pick up the parameter change you will need to reboot the instance.
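If you prefer to script those steps, a minimal boto3 sketch might look like the following; the parameter group name, family, and instance identifier are placeholders you would replace with your own:

    import boto3  # AWS SDK for Python

    rds = boto3.client("rds")

    # Create a custom parameter group (the default group cannot be modified).
    rds.create_db_parameter_group(
        DBParameterGroupName="my-mysql-params",      # placeholder name
        DBParameterGroupFamily="mysql5.6",           # match your engine version
        Description="Enable the event scheduler",
    )

    # Turn the event scheduler on; the change is picked up at the next reboot.
    rds.modify_db_parameter_group(
        DBParameterGroupName="my-mysql-params",
        Parameters=[{
            "ParameterName": "event_scheduler",
            "ParameterValue": "ON",
            "ApplyMethod": "pending-reboot",
        }],
    )

    # Attach the group to the instance, then reboot so it takes effect.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-db-instance",       # placeholder identifier
        DBParameterGroupName="my-mysql-params",
    )
    rds.reboot_db_instance(DBInstanceIdentifier="my-db-instance")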

How to perform targeted select queries on the main DB instance when using Amazon MySQL RDS and read replicas?

I'm considering using Amazon MySQL RDS with read replicas. The only thing bothering me is replica lag and eventual inconsistency. For example, imagine the case where a user modifies his profile (the UPDATE is performed on the main DB instance) and then refreshes the page to see the changed info (the SELECT might be served from a replica that has not received the changes yet due to replica lag).
By accident, I found an Amazon article which mentions it's possible to perform targeted queries. To me it sounds like we can add some parameter or other hint to tell Amazon to execute a select on the main DB instance instead of on a replica. The example with the user profile is quite trivial, but the same problem occurs in more realistic cases, for example checkout, where a user performs several steps and needs to see updated info on the next screens. Yes, the application could cache the entire data set on its own, but it would be great if anybody knows how to perform targeted queries on the main DB instance.
I read the link you referenced and didn't find any mention of "target" or anything like that.
But this line might be what you're referring to:
Otherwise, you should spread out the load and read from one of the
Read Replicas. You can make this decision on a query-by-query basis
within your application. You will probably want to maintain some sort
of registry of available Read Replicas within your application,
choosing from among them on a round-robin or randomly distributed
basis.
If so, then I interpret that line to suggest that you can balance reads in your application by just picking one server from a pool and hitting that one, but it would all be in your application logic, as in the sketch below.
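To make that concrete, here is a minimal Python sketch of such application-side routing, with read-your-writes handled by pinning sensitive reads to the primary. The hostnames and credentials are placeholders, and pymysql is just one possible client:

    import random
    import pymysql  # assumed MySQL client; any DB-API driver works the same way

    # Hypothetical endpoints -- substitute your real RDS hostnames.
    PRIMARY = "mydb.abc123.us-east-1.rds.amazonaws.com"
    REPLICAS = [
        "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",
        "mydb-replica-2.abc123.us-east-1.rds.amazonaws.com",
    ]

    def connect(host):
        # Placeholder credentials, for illustration only.
        return pymysql.connect(host=host, user="app", password="secret", database="mydb")

    def run_query(sql, params=(), use_primary=False):
        """Send writes to the primary; spread ordinary reads across replicas.
        Pass use_primary=True for reads that must see just-written data."""
        is_read = sql.lstrip().split(None, 1)[0].upper() in ("SELECT", "SHOW")
        host = PRIMARY if (use_primary or not is_read) else random.choice(REPLICAS)
        conn = connect(host)
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                if is_read:
                    return cur.fetchall()
            conn.commit()
        finally:
            conn.close()

    # Read-after-write: fetch the profile from the primary so replica lag
    # cannot hand back stale data.
    # run_query("UPDATE users SET name = %s WHERE id = %s", ("Bob", 1))
    # fresh = run_query("SELECT name FROM users WHERE id = %s", (1,), use_primary=True)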

Google Compute Engine gives an error when creating an instance with existing boot and data disks

I originally created an instance with a persistent boot disk and a persistent data disk. I wanted to verify that, should something happen to the instance, I could just recreate it with the same boot and data disks and it would run as normal.
However, I'm getting this error when creating the instance from the developer console:
Invalid value for field 'resource.disks[1].source': 'site-data'. Must be a URL to a valid Compute resource of the correct type.
The only thing I'm doing differently is setting the boot disk to the previous site-boot disk rather than a new image, and attaching the site-data disk in read/write mode.
I suggest you try again -- it looks like the web-based Developer Console was broken for a few days bracketing the time you posted your question. It seems to work correctly now.
I also received this error when attempting to create an instance that included an additional Persistent Disk. Creating an instance with only the boot drive worked fine, but attempting to create an instance with any additional disk (including a new, empty disk) resulted in the same error you reported above.
I used the "Need Help?" link at the bottom left of the 'Create a new instance' web form to report the problem yesterday (10/21/14). Although I did not receive any kind of reply (I have not paid for any support options), the issue was resolved within 24 hours. I am now able to successfully create instances with additional Persistent Disks again.
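For what it's worth, the same creation can be done outside the Developer Console with the gcloud CLI, a useful fallback when the web form misbehaves. A sketch, assuming both disks live in us-central1-a and the instance is named my-site:

    gcloud compute instances create my-site \
        --zone us-central1-a \
        --disk name=site-boot,boot=yes \
        --disk name=site-data,mode=rw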