Cannot drop the distribution database 'distribution' on managed instance

I am not able to delete the distributor and distribution database because it says they are currently in use on Azure Managed Instance. I set up transactional replication from an Azure Managed Instance to an Azure SQL VM, and then tried to remove the replication, publisher, subscriber, and distributor. I was able to drop the replication, publisher, and subscriber, but the distributor is not getting deleted.
I am trying to do:
exec sp_dropdistributor @no_checks = 1, @ignore_distributor = 1
Then I got the following error:
Msg 21122, Level 16, State 1, Procedure sys.sp_dropdistributiondb,
Line 125 [Batch Start Line 6]
Cannot drop the distribution database 'distribution' because it is
currently in use.
I even tried to disable the distributor using the Disable Publishing and Distribution wizard. The process was unsuccessful.
What steps should I now follow to delete my distributor?

Ankita, can you please file a support request so this issue can be investigated? The "New support request" option on the Managed Instance portal blade will guide you through the process.

I have also encountered this issue. Eventually I was able to drop the database via the Azure Portal.
Go to your SQL managed instance, scroll down in the "Overview" tab, open the distribution database, and delete the database via the button at the top.
The process which prevented deletion of the database via sp_dropdistributor will keep on running, and it can't be killed via KILL. I haven't gotten any feedback on what to do about that yet.
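For anyone hitting the same thing, this is a quick way to see which sessions are holding the distribution database before trying the portal route (a sketch using the standard sys.dm_exec_sessions DMV; 'distribution' is the default distribution database name):

-- list sessions currently attached to the distribution database
SELECT session_id, login_name, program_name, status
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('distribution');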

Related

GCP Cloud SQL for MySQL general log generates multiple sub logs

I have a MySQL instance set up on Google Cloud with the following flags:
general_log: on
log_output: FILE
On the client side, I'm connecting via Cloud SQL Proxy authentication with DBeaver. The issue is that when I execute queries containing new lines in DBeaver, the logs shown on the Logs Explorer page are split into multiple sub-logs, each containing one line of that query. Is there some way I can concatenate these logs by reconfiguring the SQL instance's flags, or by using a different GCP plugin for audit logging other than general_log? I need to resolve this issue on the server side, not the client side (I'm aware that I can simply reformat the text editor in DBeaver to eliminate new-line characters).
I'm aware of the new auditing plugin cloudsql_mysql_audit, but when I install it on my SQL instance I can't see any logs at all.
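For reference, the two flags above would typically be set like this (a sketch using the gcloud CLI; INSTANCE_NAME is a placeholder):

gcloud sql instances patch INSTANCE_NAME --database-flags=general_log=on,log_output=FILE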

Getting an error while using GCP Data Migration Service

I am trying to move my database from a managed DigitalOcean MySQL database to GCP Cloud SQL, and I thought I'd give the Database Migration Service a try.
Note that I have already tried the one-time MySQL dump method and it works fine. I just wanted to try the continuous method to minimize downtime.
Before even creating and running the job, I try to "Test the job", but I get the following error:
zoho-tracker is the project name and destination-mysql-8 is the destination profile name. Something else that confuses me: the button says "Go To Source" while the error string shows the destination profile name.
I have tried reading the docs as much as I could and I have checked the prerequisites too. Here are some points of information:
Source MySQL version is 8.0.20 and the destination version I am setting is 8.
The GTID mode value that I checked using SHOW VARIABLES LIKE 'gtid_mode' is found to be ON.
The server_id value is 2 (non-zero).
All tables in the relevant DB are InnoDB.
The user on the source has the following privileges: SELECT, EXECUTE, RELOAD, SHOW VIEW, REPLICATION CLIENT, REPLICATION SLAVE.
The user/pass/host combo has been verified many times and is correct.
The user was created with the 'username'@'%' string and not 'username'@'localhost'.
The user was created with the mysql_native_password plugin (although I have tried different users which use the caching_sha2_password plugin too).
The connectivity method is IP allowlist, and all connections are allowed while testing, so I don't think I need to add the destination IP to the allowlist.
The version_comment variable on the source has a value of 'Source Distribution', not 'MariaDB'.
Any pointers would be appreciated.
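(For reference, the checks above roughly map to the following statements; 'mydb' and 'username' are placeholders for the real schema and user:)

SHOW VARIABLES LIKE 'gtid_mode';
SHOW VARIABLES LIKE 'server_id';
-- any tables in the relevant schema that are not InnoDB
SELECT table_name, engine FROM information_schema.tables
WHERE table_schema = 'mydb' AND engine <> 'InnoDB';
SHOW GRANTS FOR 'username'@'%';
SHOW VARIABLES LIKE 'version_comment';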

SSIS Packages running on SQL Server Agent randomly cannot connect to Snowflake

For the past week, multiple SSIS packages running on SQL Server Agent that load data into Snowflake have started randomly returning the following message.
"Failed to acquire connection "snowflake". Connection may not be configured correctly or you may not have the right permissions on this connection."
We are seeing this message across multiple jobs. Each of the jobs loads multiple tables, and it's not happening on every call to Snowflake within the projects, but just on one or two tasks in jobs that have hundreds.
We are using the 2.20.2 drivers from Snowflake
We ran the jobs while Wireshark was capturing network traffic, and the captures were reviewed by the network team. They didn't have much luck because the ACK messages were not being shown.
We also ran Process Monitor while the jobs ran and did not find anything that alluded to any issues.
We also dug through the logs from the Snowflake driver and found the calls right before and right after, but no messages for the task that failed. Since those logs bounce around on which file they write to, it's a bit hard to track sequential actions when multiple tasks in a job are running together.
We also installed SnowCD and ran it and it returned a full success message.
The user that runs the jobs on SQL Server Agent is an admin on the server and has sysadmin rights on the SQL Server instance.
The warehouse the drivers connect to is a size Large with a max of 3 clusters (it was at 1 when the issue started, but we upped it to 3 to see if that helped).
Jobs are running on Windows Server 2016 Datacenter in Azure.
The SQL Server instance is SQL Server 2016 13.0.4604.0.
We cannot figure out why we are suddenly and randomly losing connections to Snowflake.
Some ideas to help get these packages working:
Add a retry to the tasks that are failing. The task would move on to the next step only upon success:
https://www.mssqltips.com/sqlservertip/5625/how-to-retry-sql-server-integration-services-ssis-control-flow-tasks/
You can also combine the truncate and insert into one step using the INSERT OVERWRITE INTO command, which will allow your package to run quicker and leave one less task to fail (see the sketch after the link below):
https://docs.snowflake.net/manuals/sql-reference/sql/insert.html#insert-using-overwrite
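A minimal sketch of that second suggestion, with placeholder table names:

-- truncates the target and loads it in a single atomic statement
INSERT OVERWRITE INTO target_table
SELECT * FROM staging_table;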
Once the SSIS packages are consistently completing, you can analyze the logs at the point of failure to see if there is any pattern to help you identify the root cause.

MySQL Server: Unable to connect to remote host. Catalog download has failed

I'm getting the following message in the taskeng.exe console:
Unable to connect to remote host. Catalog download has failed
It seems to be related to the ManifestUpdate task, which updates the catalog at a fixed time:
Automatic updates
You can configure MySQL Installer to automatically update the MySQL product catalog once per day. To enable this feature and set the update time, click the wrench icon on the Installer dashboard.
The next window configures the Automatic Catalog Update. Enable or disable this feature, and also set the hour.
So, why am I getting that error?
It's still not very clear to me what that is for exactly. Does it look for possible updates of MySQL products?
Can I disable the task to get rid of that error?
Look in Windows Task Scheduler. On the left, open:
Task Scheduler Library\MySQL\Installer
Right-click the ManifestUpdate entry and select "Disable".
Original post from Andrew Clements in MySQL Forums (http://forums.mysql.com/read.php?10,626478,626575#msg-626575)
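The same change can be scripted from an elevated command prompt (a sketch; the task path is taken from the Task Scheduler tree above):

schtasks /Change /TN "MySQL\Installer\ManifestUpdate" /Disable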
It is a basic permission problem. Go to C:\ProgramData and right-click on the MySQL folder, open the Security tab, click Edit under group or user names, select your computer user, tick all the boxes under Allow, then apply and close. This worked for me.
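If you prefer to script that grant, something like this should be roughly equivalent (a sketch; it gives the current user full control over the folder and everything beneath it):

icacls "C:\ProgramData\MySQL" /grant "%USERNAME%":(OI)(CI)F /T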

Unable to create indexes in Sphinx after an emergency server restart [Can't create TCP/IP socket]

I'm trying to execute the command in the Windows console:
C:\SphinxSearch\bin\indexer --all --config C:\SphinxSearch\sphinx.conf
But I get an error:
ERROR: index 'indexname': sql_connect: Can't create TCP/IP socket
(10093) (DSN=mysql://root:*@localhost:3306/test).
The data source is MySQL. Before the server restart everything worked fine.
How can I fix it?
I'm having the same error 10093, which is a Windows error code by the way. In my case it occurs when trying to run the indexer through the SYSTEM account via a scheduled task. If I run it directly as administrator, there's no problem.
According to the site above:
Either your application hasn't called WSAStartup(), or WSAStartup() failed, or--possibly--you are accessing a socket which the current active task does not own (i.e. you're trying to share a socket between tasks).
In my case I'm thinking it might be the last one, some security problem due to the SYSTEM user being used in my scheduled task. I was able to solve it by using my admin user instead: in the scheduled task, I set it to use my local admin account with the options "Run whether user is logged on or not" and "Do not store password". I've also checked "Run with highest privileges". This seems to have done the trick, as now my indexes are rotating on schedule.
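For reference, a comparable task could be created from the command line like this (a sketch; the task name, schedule, and account are placeholders, /NP corresponds to "Do not store password", and /RL HIGHEST corresponds to "Run with highest privileges"):

schtasks /Create /TN "SphinxIndexer" /TR "C:\SphinxSearch\bin\indexer --all --config C:\SphinxSearch\sphinx.conf" /SC DAILY /ST 03:00 /RU MyAdminUser /NP /RL HIGHEST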