I want to automate the installation and configuration of a MySQL server using the Azure CLI.
The installation works well using az mysql server create; however, the configuration step az mysql server configuration set -n time_zone --value Europe/Paris fails with the following error:
Deployment failed. Correlation ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. The value 'Europe/Paris' for configuration 'time_zone' is not valid. The allowed values are '[+|-][0]{0,1}[0-9]:[0-5][0-9]|[+|-][1][0-2]:[0-5][0-9]|SYSTEM'.
As I read in the MySQL docs, I could enable named time zones by executing the SQL statement SET GLOBAL time_zone = timezone;, but unfortunately my user would need the SUPER privilege for this to succeed, and that is not possible in Azure.
The other approach would be to run mysql_tzinfo_to_sql, but that is not available through the Azure CLI.
Is there any other way to activate named time zones?
From the Azure DB for MySQL documentation:
Populating the time zone tables
The time zone tables on your server can be populated by calling the mysql.az_load_timezone stored procedure from a tool like the MySQL command line or MySQL Workbench.
CALL mysql.az_load_timezone();
Also, from the doc you linked to in your question:
Upon initial deployment, an Azure Database for MySQL server includes system tables for time zone information, but these tables are not populated. The time zone tables can be populated by calling the mysql.az_load_timezone stored procedure from a tool like the MySQL command line or MySQL Workbench.
According to the error message, the value must match one of these three patterns:
[+|-][0]{0,1}[0-9]:[0-5][0-9]
e.g. -04:30
or
[+|-][1][0-2]:[0-5][0-9]
e.g. -12:00
or
SYSTEM
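You can sanity-check candidate values locally against that pattern before pushing a configuration change. A minimal sketch in Python, where the regex is copied verbatim from the error message and is_allowed is just an illustrative helper name:

```python
import re

# Regex copied verbatim from the Azure error message
ALLOWED = re.compile(r"[+|-][0]{0,1}[0-9]:[0-5][0-9]|[+|-][1][0-2]:[0-5][0-9]|SYSTEM")

def is_allowed(value):
    """True if `value` would pass this validation for the time_zone parameter."""
    return ALLOWED.fullmatch(value) is not None

print(is_allowed("-04:30"))        # True: offset form
print(is_allowed("+12:00"))        # True: offset form
print(is_allowed("SYSTEM"))        # True: literal keyword
print(is_allowed("Europe/Paris"))  # False: named zones are rejected here
```

Note that the pattern requires a leading sign, so a bare 12:00 would be rejected too; named zones like Europe/Paris can never match it, with or without quotes.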
So have you tried doing it with quotes?
az mysql server configuration set -n time_zone --value "Europe/Paris"
Call this stored procedure from your session on the server:
CALL mysql.az_load_timezone();
If you are running the mysql.az_load_timezone command from MySQL Workbench, you may need to turn off safe update mode first using SET SQL_SAFE_UPDATES=0; (or via Preferences -> SQL Editor -> Safe Updates), and then reconnect to your MySQL server.
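Putting the pieces together, a minimal sketch of a MySQL client session (the final SELECT is just a verification step):

```sql
-- One-time: populate the time zone tables (no SUPER privilege needed on Azure)
CALL mysql.az_load_timezone();

-- Named time zones now resolve, at least at the session level
SET time_zone = 'Europe/Paris';

-- Verify the session setting
SELECT @@session.time_zone;
```

Once the tables are populated, it is also worth retrying the server-level configuration command with the named zone.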
Related
I am trying to display audit logs for Cloud SQL in the Stackdriver console. I have already enabled audit logging for Cloud SQL in IAM.
I connect to MySQL or PostgreSQL databases in Cloud SQL, and when I connect, these audit logs are displayed in the console:
request: {
  #type: "type.googleapis.com/google.cloud.sql.v1beta4.SqlInstancesGetRequest"
  instance: "testpostgres"
}
But after this, if I perform any operation in that particular database, like SELECT, INSERT, or DELETE, no audit log (data access) is generated for either MySQL or PostgreSQL. In the MySQL instance I have set the following database flag:
audit_log = ON (this feature is in beta)
For MySQL, if I add one more flag, **general_log**, I am able to see those DML statements, but they come under a different log, **cloudsql.googleapis.com%2Fmysql-general.log**, and not under audit logs.
Similarly for postgres these statements come under a different log:
**cloudsql.googleapis.com%2Fpostgres.log**
I am new to Cloud SQL, so I am not familiar with the logging implemented there. Why are no audit logs generated when DML is performed in a particular database in the Cloud SQL instance, and should I set any other flag for this purpose?
DML statements are not logged to the audit logs by default in Cloud SQL. To see DML logs in Logging, you need to use pgAudit on your PostgreSQL instance. Note that pgAudit is only available for PostgreSQL instances.
Steps to enable pgaudit:
Enable pgaudit using gcloud command
gcloud sql instances patch [INSTANCE_NAME] --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=all
Create the pgaudit extension in your postgres database
CREATE EXTENSION pgaudit;
Run a simple select statement on your postgres database
Query DML statements logs in Logging:
Open Logging -> Logs Explorer
In the query builder apply this filter:
resource.type="cloudsql_database"
logName="projects/<your-project-name>/logs/cloudaudit.googleapis.com%2Fdata_access"
protoPayload.request.#type="type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry"
I used the quickstart for PostgreSQL for testing: query the postgres database, then filter Logging using the filter above.
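If you prefer the command line over the Logs Explorer UI, the same filter can be passed to gcloud logging read (a sketch; the --limit and --format values are just illustrative choices):

```shell
gcloud logging read '
  resource.type="cloudsql_database"
  logName="projects/<your-project-name>/logs/cloudaudit.googleapis.com%2Fdata_access"
  protoPayload.request.#type="type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry"
' --limit=10 --format=json
```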
I have to set log_bin=ON on the MySQL server. I have managed to set server_id and gtid_mode, and I've read some documentation on the MySQL site.
However, I can't resolve this error:
mysql> SET GLOBAL log_bin=ON ;
ERROR 1238 (HY000): Variable 'log_bin' is a read only variable
Are you running MySQL inside a Compute Engine instance, or are you using Cloud SQL? Changing the log_bin variable is currently not supported in Cloud SQL for MySQL (log_bin is not listed as a supported flag), and while a feature request to allow users to change this flag's value was submitted a while ago, it does not appear to have been implemented yet.
If you're running MySQL inside a Compute Engine instance, I would suggest trying to declare --log-bin as a startup option instead.
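For the Compute Engine case, these settings are normally made persistent in my.cnf rather than at runtime (log_bin is read-only precisely because it can only be set at server startup). A sketch, where the server_id and log path are placeholders to adapt to your setup:

```ini
[mysqld]
server_id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
gtid_mode                = ON
enforce_gtid_consistency = ON
```

Restart mysqld after editing; SHOW VARIABLES LIKE 'log_bin'; should then report ON. Note that gtid_mode=ON requires enforce_gtid_consistency=ON.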
I use SQLAlchemy with the pymysql driver to connect to a MySQL instance. This MySQL instance has its time zone configured as UTC.
Now, I want to execute a long SQL query (it is a hard-coded SQL script) that makes heavy use of date functions against timestamp columns. I want this SQL to be executed under my country's local time zone (namely, +09:00).
Question
When making an engine with the pymysql driver, is it possible to set time_zone to something other than the server time zone?
Or at least to do so on a certain connection?
See how to pass Custom DBAPI connect() arguments for example of how to get past the SQLAlchemy part of what you are trying to do.
There is an open feature request in the pymysql repo for what you want: Allow specifying a timezone for connections.
The pymysql.connections.Connection object accepts a parameter called init_command for which the docs state:
Initial SQL statement to run when connection is established.
A contributor provides the following example in the discussion of that issue:
init_command="SET SESSION time_zone='+00:00'"
So your create_engine might look something like this:
engine = create_engine(..., connect_args={"init_command": "SET SESSION time_zone='+09:00'"})
I've set up a System DSN to the MySQL database and the connection is okay when I test it. When I set up a linked server using that DSN it connects and I can see the catalog and tables but when I try to query it, I get an error that says "contains no columns that can be selected or the current user does not have permissions on that object". When I use the same settings to connect through MySQL Workbench it works and I can query the data. Any ideas?
Thanks.
Make sure that the service account has permissions.
I'm having problems using 'LOAD DATA INFILE' via a db.executesql() command: I'm getting "InternalError: (1148, u'The used command is not allowed with this MySQL version')"
A bit of digging, and I found I need to set --local-infile as a MySQL option (MySQL: Enable LOAD DATA LOCAL INFILE). I've modified my.cnf, but that doesn't seem to be picked up by web2py. How can I pass options to the MySQL client that web2py uses?