MySQL - Views are automatically converted into empty tables with default datatype tinyint(4)

On a Google Cloud hosted platform, I created some MySQL views for my project. Those views have approximately 10k rows.
After some time, the views are automatically converted into empty tables whose names match the respective view names. The columns of those tables are set to the default datatype tinyint(4).
When this conversion happens, the views are deleted automatically and the created tables contain no rows.
If I execute a SELECT statement against the view_name, it fetches data from the created table rather than from the view (because no view with that name exists anymore), so it returns empty data.
If I delete those empty tables and recreate the views, then the SELECT query returns the data again.
After some time, the views are once again automatically converted into tables, and the SELECT query returns empty data.
This process happens regularly.
Why are the views automatically converted into empty tables?
Snapshot of a converted table
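To check what kind of object the name currently refers to, one can query information_schema; a minimal sketch, assuming a hypothetical schema my_db and object my_view:

SELECT TABLE_NAME, TABLE_TYPE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'my_db'    -- hypothetical schema name
  AND TABLE_NAME = 'my_view';   -- hypothetical object name
-- TABLE_TYPE reads 'VIEW' while the view exists and 'BASE TABLE' once it has been replaced by a table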

Related

How to Create Timestamp column (Inside Pipeline) in destination SQL database table while migrating data from ADLS to SQL

I am migrating a bulk of Parquet files from ADLS to SQL database tables, so inside a ForEach I used a copy activity, and it copies data successfully for all tables. Now, in every table, I have to add a timestamp column that records the date when the data was loaded into that respective table. What should I do in the pipeline so that a timestamp column gets added to each table and holds the date when the data gets loaded?
Use a data flow with a derived column called "timestamp" set to currentTimestamp()
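If handling it on the SQL side instead of inside the data flow is an option, an alternative (just a sketch, with a hypothetical table dbo.MyTable and column LoadedDate) is to give the destination table a date column with a default, so rows written by the copy activity are stamped automatically:

ALTER TABLE dbo.MyTable                            -- hypothetical table name
ADD LoadedDate datetime2 NOT NULL
    CONSTRAINT DF_MyTable_LoadedDate DEFAULT (SYSUTCDATETIME());
-- existing rows are back-filled with the current time; new rows get the time they are inserted

Note that the copy activity must not map a value into LoadedDate, otherwise the default will not fire.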

How to import certain data from file on AWS Aurora

Problem: I have an Aurora RDS database that has a table where the data for a certain column was deleted. I have a snapshot of the DB from a few days ago that I want to use to populate the said column with the values from the snapshot. The issue is that certain rows have been deleted from the live DB in the meantime and I don't want to include them again.
I want to mount the snapshot, connect to it and then SELECT INTO OUTFILE S3 the table that interests me. Then I will LOAD DATA FROM S3 into the live DB, selecting only the column that interests me. But I haven't found information about what happens if the number of rows differ, namely if the snapshot has rows that were deleted in the meantime from the live DB.
Does the import command take the ID column into consideration when doing the import? Should I also import the ID column? I don't want to recreate the rows in question, I only want to populate the existing rows with the values from the column I want from the snapshot.
ALTER TABLE the destination table to add the column you are missing. It will be empty of data for now.
LOAD DATA your export into a different table than the ultimate destination table.
Then do an UPDATE with a JOIN between the destination table and the imported table. In this update, copy the values for the column you're trying to restore.
By using an inner join, it will only match rows that exist in both tables.
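A minimal sketch of that sequence on Aurora MySQL, using hypothetical names (my_table, my_table_restore, key column id, restored column my_column):

-- 1. Add the missing column back to the live table (empty for now)
ALTER TABLE my_table ADD COLUMN my_column VARCHAR(255);

-- 2. Load the snapshot export into a separate table; delimiters and column list
--    must match however the SELECT INTO OUTFILE S3 export was written
CREATE TABLE my_table_restore (id INT PRIMARY KEY, my_column VARCHAR(255));
LOAD DATA FROM S3 's3://my-bucket/export/my_table.part_00000'
INTO TABLE my_table_restore
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(id, my_column);

-- 3. Copy the values across, matching on the key; the inner join means rows that
--    were deleted from the live DB since the snapshot are simply never touched
UPDATE my_table t
JOIN my_table_restore r ON r.id = t.id
SET t.my_column = r.my_column;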

Update MySQL database Table from another Table in different location

I have the following issue: on localhost (my computer) I have a table in a database which I use to update the data for a month. Once the data is correct, I need to update the table in the database which resides on the server.
I use Navicat to do the work, but it only transfers data by deleting the existing table on the server and re-sending all the data from my localhost.
The problem is that the table now has almost 300,000 records stored, and it takes too long to transfer the data, leaving the database empty for some time.
Is there any way to update only the data without deleting the whole table?
Export the local table under a different name as a mysqldump or just a CSV; 300k rows is not a big deal, and use a separate table for now.
Then upload that table (table2) to the server database and use a query to update table1 using table2's data.
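A sketch of that approach, with hypothetical names (table1 on the server, table2 holding the uploaded local data, shared key id):

UPDATE table1 t1
JOIN table2 t2 ON t2.id = t1.id
SET t1.col_a = t2.col_a,
    t1.col_b = t2.col_b;

-- optionally, add rows that exist locally but not yet on the server
INSERT INTO table1 (id, col_a, col_b)
SELECT t2.id, t2.col_a, t2.col_b
FROM table2 t2
LEFT JOIN table1 t1 ON t1.id = t2.id
WHERE t1.id IS NULL;

This way table1 stays populated the whole time instead of being emptied and re-filled.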

SQL not executing properly

I have two SQL files: the first one creates the database, tables, and stored procedures. The second populates the created tables with 90,000 entries.
The create file produces a total of 1 database, 26 tables, and 104 stored procedures.
The populate file adds the 90,000 entries.
For some reason, when I execute the create file, it works perfectly every time.
When I execute the populate file, it only works halfway: it populates half the tables, and the other half stays empty. I noticed that if I wait around 2 minutes after executing the create file and then execute the populate file, it works perfectly. Why is that? Is there a way to populate the tables quickly without having to wait?
I am using the latest version of MySQL, and I have tried executing the contents of the populate file via phpMyAdmin, which yielded the same results.
I have to assume that the file that creates the database and tables also creates PRIMARY KEYs and indexes. Have you considered splitting the files up into:
Create Database and Tables
Load Data
Create Primary Keys and Indexes
Create Stored Procedures.
As was also suggested, try increasing bulk_insert_buffer_size and change the way you insert, from single-row statements as you posted, to:
INSERT INTO faults(id, fault_name) VALUES (...), (...) ...
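For example (column names taken from the snippet above; values and buffer size are just illustrative):

SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 64;  -- 64 MB, tune as needed

INSERT INTO faults (id, fault_name) VALUES
  (1, 'overheat'),
  (2, 'short circuit'),
  (3, 'sensor failure');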

SQL Server: unique key for batch loads

I am working on a data warehousing project where several systems are loading data into a staging area for subsequent processing. Each table has a "loadId" column which is a foreign key against the "loads" table, which contains information such as the time of the load, the user account, etc.
Currently, the source system calls a stored procedure to get a new loadId, adds the loadId to each row that will be inserted, and then calls a third sproc to indicate that the load is finished.
My question is: is there any way to avoid having to pass the loadId back to the source system? For example, I was imagining that I could get some sort of connection ID from SQL Server that I could use to look up the relevant loadId in the loads table. But I am not sure whether SQL Server has a variable that is unique to a connection?
Does anyone know?
Thanks,
I assume the source systems are writing/committing the inserts into your source tables, and multiple loads are NOT running at the same time...
If so, have the source load call a stored proc, newLoadStarting(), prior to starting the load process. This stored proc will update the load table (creating a new row and recording the start time).
Put a trigger on your loadID column that gets max(loadID) from this table and inserts it as the current load ID.
For completeness you could add an endLoading() proc which sets an end date and de-activates that particular load.
If you are running multiple loads at the same time in the same tables...stop doing that...it's not very productive.
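A rough sketch of the approach above in T-SQL, with hypothetical names (dbo.loads, dbo.staging_table, key column pk) standing in for the real ones:

CREATE TABLE dbo.loads (
    loadId    int IDENTITY(1,1) PRIMARY KEY,
    startTime datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    endTime   datetime2 NULL                      -- NULL while the load is active
);
GO

CREATE PROCEDURE dbo.newLoadStarting AS
    INSERT INTO dbo.loads DEFAULT VALUES;         -- new row records the start time
GO

-- stamp incoming staging rows with the latest open load
CREATE TRIGGER dbo.trg_staging_loadId ON dbo.staging_table AFTER INSERT AS
    UPDATE s
    SET s.loadId = (SELECT MAX(loadId) FROM dbo.loads WHERE endTime IS NULL)
    FROM dbo.staging_table s
    JOIN inserted i ON i.pk = s.pk;               -- pk is a hypothetical key column
GO

CREATE PROCEDURE dbo.endLoading @loadId int AS
    UPDATE dbo.loads SET endTime = SYSUTCDATETIME() WHERE loadId = @loadId;
GO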
A local temp table (with a single pound sign, #temp) is unique to the session; dump the ID in there, then select from it.
BTW, this will only work if you use the same connection.
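A tiny sketch of that idea, assuming the existing "get a new loadId" proc hands the id back as its return value (names hypothetical):

DECLARE @loadId int;
EXEC @loadId = dbo.getNewLoadId;               -- hypothetical proc name
SELECT @loadId AS loadId INTO #currentLoad;    -- #temp table lives for the session

-- any later batch on the same connection can read it back
SELECT loadId FROM #currentLoad;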
In the end, I went for the following solution "pattern", pretty similar to what Markus was suggesting:
I created a table with a loadId column, default null (plus some other audit info like createdDate and createdByUser);
I created a view on the table that hides the loadId and audit columns, and only shows rows where loadId is null;
The source systems load (and view) data through the view, not the table;
When they are done, the source system calls a "sp__loadFinished" procedure, which puts the right value in the loadId column and does some other logging (number of rows received, date called, etc). I generate this from a template as it is repetitive.
Because loadId now has a value for all those rows, it is no longer visible to the source system and it can start another load if required.
I also arrange for each source system to have its own schema, which is the only thing it can see and is its default on logon. The view and the sproc are in this schema, but the underlying table is in a "staging" schema containing data across all the sources. I ensure there are no collisions through a naming convention.
Works like a charm, including the one case where a load can only be complete if two tables have been updated.
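For reference, a stripped-down sketch of that pattern (names are illustrative; the real tables carry more audit columns):

CREATE SCHEMA staging;
GO
CREATE SCHEMA sourceA;                              -- one schema per source system
GO

-- the loads table, simplified
CREATE TABLE staging.loads (
    loadId   int IDENTITY(1,1) PRIMARY KEY,
    loadDate datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- the real staging table, with the audit columns the sources never see
CREATE TABLE staging.customer (
    customerId    int          NOT NULL,
    customerName  varchar(100) NOT NULL,
    loadId        int          NULL,                -- stays NULL until the load is finished
    createdDate   datetime2    NOT NULL DEFAULT SYSUTCDATETIME(),
    createdByUser sysname      NOT NULL DEFAULT SUSER_SNAME()
);
GO

-- what the source system actually inserts into and reads from
CREATE VIEW sourceA.customer AS
    SELECT customerId, customerName
    FROM staging.customer
    WHERE loadId IS NULL;
GO

-- called by the source system once the load is complete
CREATE PROCEDURE sourceA.sp__loadFinished AS
BEGIN
    DECLARE @loadId int;
    INSERT INTO staging.loads DEFAULT VALUES;
    SET @loadId = SCOPE_IDENTITY();

    UPDATE staging.customer
    SET loadId = @loadId
    WHERE loadId IS NULL;
    -- (plus logging: number of rows received, date called, etc.)
END
GO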