Only one type of EC2 instance is getting launched in Elastic Beanstalk

I am using the t3.micro and t3a.micro instance types for my Beanstalk environment, with Network Out as the auto scaling trigger. Only t3.micro instances are created. Why is no instance of type t3a.micro being created? The max instance count is set to 2, so when the trigger condition is satisfied I can see two t3.micro instances; but when the condition is satisfied again after that, no t3a.micro instance is created.
I also tried reducing the max instance count to 1. By default a t3.micro instance is launched, so when the auto scaling trigger condition is next satisfied a new t3a.micro instance should be created, but that was not the case.
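For context, the instance type list in question is usually set through the `aws:ec2:instances` namespace; a minimal `.ebextensions` sketch is below. Note that, per the Elastic Beanstalk documentation, when Spot is disabled (the default) Beanstalk launches On-Demand instances of the first type in the list only; the remaining types are considered for Spot allocation when `EnableSpot` is true.

```yaml
option_settings:
  aws:ec2:instances:
    # With EnableSpot false, only the FIRST type (t3.micro here) is
    # ever launched; later types are used for Spot requests only.
    EnableSpot: true
    InstanceTypes: 't3.micro,t3a.micro'
```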

Related

MySQL/MariaDB: selective synchronization of tables between instances

I have to synchronize some tables from one MySQL database to another (a different server on a different machine).
The transfer should include only specific tables, and only rows of those tables with a particular characteristic (e.g. a column named transfer set to 1).
It should be automatic/transparent, fast, and work in short cycles (at least every 20 s).
I have tried different approaches, but none of them met all the requirements.
Database synchronization with Galera works fine but cannot exclude tables/rows.
mysqldump is not automatic (it must be started manually) and does not exclude either.
Is there no other way to do this job besides writing my own code that runs permanently?
Such a partial sync must be performed with a specially created schema.
Possible realization:
Check whether your server instances support the FEDERATED storage engine (note that, depending on the server and version, it may be disabled by default and need to be enabled at startup).
Check whether the destination server can access the data stored on the source server using CREATE SERVER.
Create a server attached to the remote source server and the needed remote database. Check that the remote data is accessible.
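For illustration, those steps might look like this; the host, credentials, and table definition below are placeholders, not values from the question:

```sql
-- 1. Check that the FEDERATED engine is available on the destination server.
SHOW ENGINES;

-- 2./3. Describe the connection to the source server, then expose one of its
-- tables through a FEDERATED table in the service database.
CREATE SERVER source_link
  FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST 'source.example.com', PORT 3306,
           USER 'sync_user', PASSWORD '...', DATABASE 'sourceDB');

CREATE TABLE serviceDB.federated_table (
  id INT NOT NULL,
  must_be_transferred TINYINT UNSIGNED NOT NULL DEFAULT 0,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='source_link/source_table';

-- Check that the remote data is accessible:
SELECT COUNT(*) FROM serviceDB.federated_table;
```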
On the destination server, create an event procedure that is executed every 20 s (and make it disabled at first). I recommend creating it in a separate service database. In this event procedure, execute queries like:
SET @event_start_timestamp = CURRENT_TIMESTAMP;
INSERT INTO localDB.local_table ( {columns} )
SELECT {columns}
FROM serviceDB.federated_table
WHERE must_be_transferred
  AND created_at < @event_start_timestamp;
UPDATE serviceDB.federated_table
SET must_be_transferred = FALSE
WHERE must_be_transferred
  AND created_at < @event_start_timestamp;
The destination server sends the corresponding SELECT query to the remote source server, which executes the query and sends the output back. The received rows are inserted. Then the destination server sends the UPDATE, which clears the flag.
Enable the event scheduler.
Enable the event procedure.
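Put together, the event definition might look roughly like this (database, table, and event names are placeholders, and `{columns}` stays a placeholder as in the queries above):

```sql
-- The scheduler must be running for any event to fire.
SET GLOBAL event_scheduler = ON;

-- Create the event disabled; enable it only after testing the body.
DELIMITER //
CREATE EVENT serviceDB.sync_event
  ON SCHEDULE EVERY 20 SECOND
  DISABLE
DO
BEGIN
  SET @event_start_timestamp = CURRENT_TIMESTAMP;

  INSERT INTO localDB.local_table ( {columns} )
  SELECT {columns}
  FROM serviceDB.federated_table
  WHERE must_be_transferred
    AND created_at < @event_start_timestamp;

  UPDATE serviceDB.federated_table
  SET must_be_transferred = FALSE
  WHERE must_be_transferred
    AND created_at < @event_start_timestamp;
END //
DELIMITER ;

-- Once the body is verified:
ALTER EVENT serviceDB.sync_event ENABLE;
```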
Ensure that your event procedure executes fast enough: it must finish its work before the next firing. That is, run your code as a regular stored procedure and check its execution time; you may need to increase the scheduling interval.
You can prevent such parallel firings by using a static flag in a service table created in your service database: if it is set (the previous event has not finished its work yet), the event procedure exits. I recommend performing this check in any case.
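One way to sketch that guard uses MySQL's named locks rather than a hand-rolled flag table (the flag-table approach described above works equally well):

```sql
-- At the start of the event body: try to take a named lock without waiting.
-- GET_LOCK returns 1 on success, 0 on timeout, NULL on error.
body: BEGIN
  IF COALESCE(GET_LOCK('sync_in_progress', 0), 0) = 0 THEN
    LEAVE body;  -- previous firing is still running
  END IF;

  -- ... the INSERT/UPDATE sync queries go here ...

  DO RELEASE_LOCK('sync_in_progress');
END body
```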
You must handle the following situation:
destination receives a row;
source updates this row;
destination marks this row as synchronized.
Possible solution:
The flag must_be_transferred should not be boolean but an (unsigned tiny) integer, with the following values: 0 - no sync needed; 1 - sync needed; 2 - selected for copying; 3 - selected for copying but altered after selection.
Algorithm:
the destination updates the rows marked with a non-zero value, setting them to 2;
the destination copies the rows using the condition flag & 2 (i.e. flag IN (2,3));
the destination clears the flag using the expression flag XOR 2 and the above condition;
the source marks altered rows as needing sync using the expression flag OR 1.
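A minimal SQL sketch of that algorithm, reusing the placeholder names from the earlier queries:

```sql
-- 1. Select for copying: every row pending sync gets the value 2.
UPDATE serviceDB.federated_table
SET must_be_transferred = 2
WHERE must_be_transferred <> 0;

-- (A concurrent change on the source now raises the flag to 3 via flag | 1,
--  so the alteration is not lost.)

-- 2. Copy the selected rows: flag & 2 matches both 2 and 3.
INSERT INTO localDB.local_table ( {columns} )
SELECT {columns}
FROM serviceDB.federated_table
WHERE must_be_transferred & 2;

-- 3. Clear the "selected" bit: 2 ^ 2 = 0 (done), 3 ^ 2 = 1 (needs another pass).
UPDATE serviceDB.federated_table
SET must_be_transferred = must_be_transferred ^ 2
WHERE must_be_transferred & 2;
```

On the source side, a BEFORE UPDATE/INSERT trigger can set `must_be_transferred = must_be_transferred | 1` on every change.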

How to restrict a Slave port's visibility when creating a new AXI4 Slave port?

I want to create an AXI4 slave port using CanHaveSlaveAXI4Port and CanHaveSlaveAXI4PortModuleImp. I was able to create them successfully. But now I want to restrict the port so it can only access a defined memory region. I know I can do that when creating a TileLink slave port, using the visibility parameter in TLClientParameters. But I don't see any related field in AXI4MasterPortParameters. I also took a look at AXI4ToTLNode, which is where the TLClientPortParameters are derived from the AXI4MasterPortParameters, and visibility is not handled there.
So is there any way to restrict the AXI4 slave port to access only the defined memory region? Or should I add it to dFn in AXI4ToTLNode?

AWS-RDS Max Allowed Packet Value Can't Be Changed

I have a MySQL database in Amazon RDS setup right now that needs to be able to act as a database and also be able to store some flat files.
It was working just fine for a while until I noticed it wasn't storing anything over 1 MB, and I couldn't figure out why. So I dove deeper into RDS and learned about parameter groups, which are sets of configuration values for the database itself. I figured the max_allowed_packet value was the problem, so I set it to a higher value.
However, I was still unable to make uploads over 1 MB. Then I realized there is another parameter named mysqlx_max_allowed_packet whose value is set to about 1 MB, and I am unable to change it.
Does anyone have any idea how to get around this or if it is possible?
I hope these steps help.
Go to your RDS Dashboard and click Parameter Groups.
Click Create DB Parameter Group, name it something like 'LargeImport' (making sure the DB Parameter Group Family you select matches your instance version), and edit the parameters.
Increase the 'max_allowed_packet' on 'LargeImport' to accommodate your import size (Valid values are 1024-1073741824).
Increase the 'wait_timeout' parameter to accommodate your import size. (Valid values are 1-31536000 seconds).
Save your changes.
Click Instances in the left column and select your instance.
Click Instance Actions and choose Modify.
Change the Parameter Group to your new 'LargeImport' group and click Continue.
Click 'Modify DB Instance'.
Once the change has completed, click Instance Actions again and reboot your instance.
Once your instance has rebooted, you should be able to do larger SQL imports.
Once you've completed your import, switch your instance parameter group back to the default parameter group and reboot it again.
I recommend testing whether your change took effect: open MySQL Workbench against your MySQL instance and run the query:
show variables like 'max_allowed_packet';
If it hasn't changed, set it to 64 MB, for example (tune the parameter to your requirements, but keep in mind that 1 GB is the AWS maximum). Also remember that after modifying the RDS instance you must reboot it to apply your changes.

Update remote database based on conditional statement

I have a MySQL database with a REST API for my main application hosted on Azure. I am setting up a hardware sensor with an additional database that will capture data multiple times a second. When a value changes by a specific threshold of the current value or after a specific time interval I want to make an API call to update the main database.
E.g.: the threshold is 10%; the last value was 10 and this value is 12, so this triggers a call to the API to add the value to the main database.
Can a trigger be added to the second database to make a HTTP request? Is there benefit to using another RDBMS in this case instead of MySQL? Does PubNub/Firebase make sense in this situation?

Creating MySQL Events in Amazon RDS

I'm trying to create a MySQL event on an RDS database. It took me a bit to figure out that I needed to change the DB parameters and get the scheduler started. However, even with the scheduler running (I can see it running in SHOW PROCESSLIST), I am still getting "ERROR 1044 (42000): Access denied for user..." when I create an event. I tried posting on the AWS discussion boards, but got nothing.
Has anyone created a MySQL event in an AWS RDS instance? If so, what am I not doing, or what am I missing to get it created?
I'm using the Master User account, so I suspect it has to be another DB parameter I haven't set.
You have to create a parameter group for your instance.
Go to your RDS Dashboard and click parameters on the left.
You should see a list of parameter groups, if you only see "default" groups, then you need to create a new group. (see 1a). If you already have a custom parameter group, skip to 1b.
1a. Click Create Parameter Group at the top, making sure you select the appropriate MySQL version your DB is using (found on the instances dashboard). Give it a name and click "Yes, Create". (Also do 1c.)
1b. Click the magnifying glass in the row where your parameter group is and it will take you to the details page.
On the details page look at the bottom and you will see "Filter:" in the search box type "Event". Let the table filter and then click "Edit Parameters". In the list below you want to change the "values" column for "event_scheduler" and type "ON" in the box.
If you originally started with a parameter group you're good to go, you can head over to your instances dashboard to see that it's applying your parameter group changes. If you created your parameter group, continue on.
Warning! The next step requires a reboot!
1c. You need to apply your parameter group to your DB instance. Click instances on the left and then select the DB you want to apply the parameter group to. At the top you want to click "Instance Actions" and then "Modify".
Change the "Parameter Group" selection to be the new parameter group you created. Click continue at the bottom of the page, then modify db instance on the next page. You now need to reboot your server, select "Instance Actions" then "Reboot".
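Once the parameter group is applied and the instance rebooted, a quick check plus a throwaway test event confirms everything works (database and table names below are placeholders; the heartbeat table is assumed to exist):

```sql
-- Should now report ON:
SHOW VARIABLES LIKE 'event_scheduler';

-- A throwaway event to confirm that creation succeeds:
CREATE EVENT mydb.heartbeat_event
  ON SCHEDULE EVERY 1 MINUTE
  DO INSERT INTO mydb.heartbeat (ts) VALUES (CURRENT_TIMESTAMP);

SHOW EVENTS IN mydb;
```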