Creating MySQL Events in Amazon RDS

I'm trying to create a MySQL event on an RDS database. It took me a bit to figure out that I needed to change the DB parameters and get the scheduler started. However, even with the scheduler running (I can see it running in SHOW PROCESSLIST), I still get "ERROR 1044 (42000): Access denied for user..." when I try to create an event. I tried posting on the AWS discussion boards, but got nothing.
Has anyone created a MySQL event in an AWS RDS instance? If so, what am I not doing, or what am I missing to get it created?
I'm using the Master User account, so I suspect it has to be another DB parameter I haven't set.

You have to create a parameter group for your instance.
Go to your RDS Dashboard and click parameters on the left.
You should see a list of parameter groups. If you only see "default" groups, then you need to create a new group (see 1a). If you already have a custom parameter group, skip to 1b.
1a. Click "Create Parameter Group" at the top, and make sure you select the appropriate MySQL version your DB is using (found on the Instances dashboard). Give it a name and click "Yes, Create". (Also do 1c.)
1b. Click the magnifying glass in the row where your parameter group is and it will take you to the details page.
On the details page, look at the bottom and you will see "Filter:". In the search box type "Event", let the table filter, and then click "Edit Parameters". In the list below, change the "Values" column for "event_scheduler" by typing "ON" in the box.
If you originally started with a custom parameter group you're good to go; you can head over to your Instances dashboard to see that it's applying your parameter group changes. If you just created your parameter group, continue on.
Warning! The next step requires a reboot!
1c. You need to apply your parameter group to your DB instance. Click instances on the left and then select the DB you want to apply the parameter group to. At the top you want to click "Instance Actions" and then "Modify".
Change the "Parameter Group" selection to be the new parameter group you created. Click continue at the bottom of the page, then modify db instance on the next page. You now need to reboot your server, select "Instance Actions" then "Reboot".

Related

AWS RDS Max Allowed Packet Value Can't Be Changed

I have a MySQL database set up in Amazon RDS right now that needs to act as a database and also be able to store some flat files.
It was working just fine for a while until I noticed it wasn't storing anything over 1MB... and I couldn't figure out why. So I dove deeper into RDS and learned about parameter groups. They seem to be a subset of configurations for the database itself, so I figured the max_allowed_packet value was the problem and I set it to a higher value.
However, I was still unable to make uploads over 1MB. Then I realized there was another parameter by the name of mysqlx_max_allowed_packet whose value is set to about 1MB, but I am unable to change it.
Does anyone have any idea how to get around this or if it is possible?
I hope these steps help.
Go to your RDS Dashboard and click Parameter Groups.
Click Create DB Parameter Group, name it something like 'LargeImport' (making sure the DB Parameter Group Family you select matches your instance version), and edit the parameters.
Increase the 'max_allowed_packet' on 'LargeImport' to accommodate your import size (Valid values are 1024-1073741824).
Increase the 'wait_timeout' parameter to accommodate your import size. (Valid values are 1-31536000 seconds).
Save your changes.
Click Instances in the left column and select your instance.
Click Instance Actions and choose Modify.
Change the Parameter Group to your new 'LargeImport' group and click Continue.
Click 'Modify DB Instance'.
Once the change has completed, click Instance Actions again and reboot your instance.
Once your instance has rebooted, you should be able to do larger SQL imports.
Once you've completed your import, switch your instance parameter group back to the default parameter group and reboot it again.
I recommend you test whether your change took effect, so open MySQL Workbench against your MySQL instance and run the query:
show variables like 'max_allowed_packet';
If it hasn't changed, you can set it to 64 MB, for example (tune the parameter to your requirements, but keep in mind that 1 GB is the AWS maximum). Also remember that after modifying the RDS instance you should reboot for your changes to apply.
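As a quick sanity check after the reboot, you can also compare both packet variables from any MySQL client. The values are reported in bytes (67108864 is 64 MB, for example), and the second variable is the X Plugin one mentioned in the question:
show variables like '%max_allowed_packet%';
-- max_allowed_packet          67108864   (example: 64 MB from the custom parameter group)
-- mysqlx_max_allowed_packet   1048576    (example: the roughly 1 MB default)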

How do I undo "filter to this schema" in MySQL workbench?

If you right click on a schema in MySQL workbench, the second context menu item is "filter to this schema". I meant to choose the first item, "Set as default schema" and missed. Now I can't see any other schemas. I tried selecting it again in case it toggles but it doesn't. Google comes back with nothing. If I reload workbench I get my schemas back, but is there a less drastic option or is this feature a one-way ticket?
When you right-click on a database (schema) and click on "Filter to This Schema", all it does is enter the name of that database into the Schema filter box above.
Anytime a string is entered in that box, it filters the view to match the entered string.
To see all the databases again, just click the icon or backspace out any characters entered. That will remove the filter from the view of databases.
Just restart your MySQL database and server.
It will show all the schemas and all options as they were before.

MySQL for Excel Add-In, failing to append

I am using the MySQL for Excel add-in. I have been using this for months to highlight a set of data and load it into my database.
I added a new table last week, and was successfully loading data into it using the same method. Then it stopped working. When I attempt to Append data, I get an error dialogue box that says "Cannot Find Column 30"... which happens to be the number of columns in (and therefore the last column of) my table.
Some more information:
- If I highlight this same set of data and try to write it to another table, it will let me (even though I don't confirm and actually do it); I get past the error dialogue box to the Mapping dialogue box.
- I tried writing data from the previous table to this new problem table. I got the same error dialogue box.
This tells me that the problem is not with the data set. It is with the table.
I then deleted the table, all the connections and redid it from scratch. Still the same problem.
Where is this failing? Thank you.
Yeah, I had the same problem. I tried the same things you did, like deleting the table and recreating everything, and it did not work. Finally I tried to append to some random table and went to "Advanced Options" at the bottom of the Append pop-up.
In there you will see "Stored Column Mappings" with some mappings in it. Delete all of them and hit "Accept". Now you can append to the table.
It will work.
I know this is super old but I have a solution because it just happened to me.
Try an append on a different table. When the box pops up, click 'Advanced Options'.
Then under column mapping uncheck both:
(Automatically store the column mapping for the given table) AND
(Reload stored column mapping for the select table automatically)
Then go to the Stored Mappings tab and delete all the stored mappings.
Click Accept and then rejoice you are free of that error for life.
best -J
If Excel does not show you the "Stored Column mappings" then connect to a MySQL table not used before in Excel, then retry the Append, the mapping will be shown. Then you can follow the answer provided by user3611272.
The problem will be resolved by doing the steps below:
Select any table other than your target table.
Click on Append Data
Press "Advanced Options"
Select tab "Stored Mappings"
Delete your target table mapping and Accept.
It will solve the issue.

Which SSIS System Variable holds error text

I am running an SSIS package using a SQL Server 2008 job. The package crashes at some point while running. I have created my own mechanism to grab the error and record it in a table, so I can see that there is an error with a specific task, but I could not find what the error is.
When I run the same package from BIDS, it works perfectly, with no error.
What I want to do is write that error string, which is shown in the "Execution Result" tab, to my own table.
So the question is which system variable holds the error string in SSIS.
The error is stored in the ErrorDescription system variable. See Handling Errors in the Data Flow for an example of how to get the error description.
Also, if you want to capture error information into a table, SSIS supports logging to a table using the SQL Server Log Provider. You can also customize the logging.
Too easy.
Left-Click (highlight) on the object you want to capture the error event (Script, or Data Flow, etc.)
Click on 'Event Handlers' - screen should open with Executable = object you clicked and Event Handler = OnError
Click URL (click here to create....)
Drag Execute SQL object from SSIS Toolbox
Configure to the database/table you want to house the error message
Write INSERT INTO DB.Schema.Table(DBName, SchemaName, TableName,ErrorMessage,DateAdded)
Write VALUES (?,?,?,'I am smart',getdate())
Click Parameters and select the USER::Variables for the ?'s + my comment.
Since this is run at the database server, it will pass in the ?'s. My SAC is already at the database as a value, but you will have selected System::ErrorDescription as parameter 3. Remember, this array is 0-based. DO NOT TRY TO NAME THE PARAMETERS. Instead, number them 0 to n. The data types are based on what you have going in; mine are all VARCHAR so... :)
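For reference, here is a minimal sketch of what that Execute SQL Task setup can look like; the log table, its columns, and the exact system variables chosen are placeholders you would adapt to your own schema:
CREATE TABLE dbo.PackageErrorLog (
    PackageName      NVARCHAR(200),
    TaskName         NVARCHAR(200),
    ErrorDescription NVARCHAR(4000),
    DateAdded        DATETIME DEFAULT GETDATE()
);
-- SQLStatement of the Execute SQL Task inside the OnError event handler;
-- map System::PackageName, System::SourceName and System::ErrorDescription
-- to the ?'s by ordinal (0, 1, 2) in the Parameter Mapping tab
INSERT INTO dbo.PackageErrorLog (PackageName, TaskName, ErrorDescription)
VALUES (?, ?, ?);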
This is a much better solution than just logging whatever the server allows you to.
I can also add a counter variable and adjust it wherever I like; then pass it to the event OnError. This will allow me to pinpoint exactly where the last successful object completed; works best in scripting objects but also available in other areas.
I'm using this so I can process thousands of cycles without actually failing the package. If a table doesn't exist or a column doesn't exist I simply log it for further review later. Oh yeah, I'm cycling through hundreds of databases capturing their architecture and the maximum column size actually used; not to be confused with the declared maximum column size.
Example: TelephoneNumber comes from a source column of char(500) (definitely bad programming but...you can't change everything so..). I capture the max len of that column and adjust the destination column to accommodate that size +/- a certain percentage.
If a table doesn't exist or a column doesn't exist anymore I log the error and keep churning. At the end, I can evaluate those entries and see if I can actually remove them from my warehouse. This happens more in the TEST and STAGE environments than in PROD. However, when a change goes through to PROD I most definitely will identify it as it's coming in to the warehouse.
Everything is configured: this includes dynamic MERGE/JOINs, INSERT, SELECT, ELEMENTS, SIZES, USAGESIZE, IDENTITY, SOURCEORDER, etc., with conversions of data to destination datatypes.
ALL that because the systemic version of logging will not provide you with the granularity you might need for this type of operation. This OnError Event Handler can if setup properly.
Check this out! He has explained, with a step-by-step process, how to configure SSIS logging, which has the error message parameter.

How do I get access to the MySQL logs from an RDS instance

How do I get access to the MySQL logs (primarily to take a look at the insert/update/delete statements) from an Amazon RDS instance?
Basically you have to enable the "general_log" parameter in the parameter group of your RDS instance
$ rds-modify-db-parameter-group mydbparametergroup --parameters "name=general_log,value=ON,method=immediate"
In case you did not apply the parameter group to the instance:
$ rds-modify-db-instance mydbinstance --db-parameter-group-name mydbparametergroup
Then access your mysql instance using root:
mysql> select * from mysql.general_log;
See:
AWS Developer Forum - Re: general query log
AWS RDS - Working with DB Parameter Groups
EDIT: 4 years have passed since I posted this answer, and it still seems valid. I hope someone from the Amazon RDS documentation team will read it and update their documentation.
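If you leave the general log on for a while, the mysql.general_log table can get big. Assuming log_output is left at TABLE (which is what makes the log queryable like this), here is a small sketch of filtering it down to write statements and rotating it with the RDS-provided procedure:
select event_time, user_host, argument
from mysql.general_log
where argument like 'INSERT%' or argument like 'UPDATE%' or argument like 'DELETE%'
order by event_time desc;
call mysql.rds_rotate_general_log;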
I had a really hard time figuring such a simple thing out, because all the online information in this regard seems outdated, including what is in the Amazon docs. Amazon has obviously changed how you do things: the default parameters can no longer be modified, and you need to create a custom set of parameters in order to modify them, including general_log. It is an obvious bug that you can still click the Edit button for default parameters, but when you try to save them, you get an error that default parameters can't be changed.
How you do it now is this: in Parameter Groups, click Create DB Parameter Group and create a new group, selecting the same DB in 'DB Parameter Group Family' as in the default parameter group. Once done, it'll create a copy identical to the default parameter group. Now edit the parameters, e.g. change general_log to '1'. According to the docs it should be '0' by default, but it is neither '0' nor '1' by default.
Now save it, go back to your instance, click on 'Instance Actions', select 'Modify' and, in the settings which appear, change 'Parameter Group' to your new custom parameter group. It'll take a few moments to apply, after which you'll need to restart your DB instance.
This is how it is as of June 2014. But there is no guarantee that it'll stay like this in the future, since in the technology industry things keep getting updated too fast (many times unnecessarily) while documents and tutorials don't get updated as fast.
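After the reboot, you can confirm from any MySQL client that the new parameter group took effect before you start querying the log; a quick check:
show variables like 'general_log';   -- should report ON once the custom parameter group is applied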