SQL database backup showing error "Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)"

I need to take a backup of a SQL Server database with its data. I select
Tasks -> Generate Scripts... -> Next -> Next, and in the Table/View Options I change Script Data from False to True -> Next -> Select All -> Script to New Query Window -> Finish.
But it ends with an error:
"Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)"
I have checked the free space on my drives: the C drive has more than 10 GB and the D drive more than 3 GB, while the database is only 500 MB. The error is reported for one particular table, TBL_SUMM_MENUOPTION, which is an empty table. How can I fix this issue and take the database backup without this error?

As @Alejandro suggested in his comment, instead of taking the backup via Tasks -> Generate Scripts... with Script Data set to True, I took the backup using Tasks -> Back Up... -> Database, which is much simpler.
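The same full backup can also be taken with plain T-SQL instead of the wizard. A minimal sketch, assuming the database is named MyDatabase and that D:\Backups exists and is writable (both are placeholders):

-- Full database backup to a single .bak file; name and path are placeholders.
BACKUP DATABASE [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH INIT,        -- overwrite any existing backup set in this file
     COMPRESSION, -- optional; omit if your edition does not support it
     STATS = 10;  -- report progress every 10 percent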

Unable to see the "Properties" option when right-clicking a user database or a table column in Azure Data Studio [MAC SYSTEM]

Steps
1. Created a user database - DB1
2. Created a table - TB1 with columns cm1 and cm2
3. Installed the extension "Database Administration Tool Extension"
4. Relaunched the application
5. Right-clicked on the database DB1. Observation -> I could see some options like 'Manage' but not Properties.
6. Right-clicked on the column cm1 under the created table TB1. Observation -> I could see only the Refresh option but not Properties.
7. Additional step: tried searching for other relevant extensions but couldn't find any.
Question -> How can I get the Properties option enabled for databases and columns in Azure Data Studio on a Mac system?
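Until the extension exposes a Properties dialog on macOS, the same column details can be read with a catalog query in a query window. A sketch, assuming a SQL Server backend and that the table was created as dbo.TB1 (adjust the schema and names as needed):

-- List name, type, length and nullability for every column of dbo.TB1.
SELECT c.name       AS column_name,
       t.name       AS data_type,
       c.max_length AS max_length_bytes,
       c.is_nullable
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.TB1');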

Copy images from a SQL image field to Azure Blob Storage as block blobs

I have a SQL table where I have stored a lot of images, something with the following structure:
CREATE TABLE [dbo].[DocumentsContent](
[BlobName] [varchar](33) NULL,
[Content] [varbinary](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I need to copy all the images from the database to Azure blobs. The problem is that there are 3 TB of images, so a script that reads them one by one from SQL and copies them to Azure is not the desired solution.
I've tried SSIS and Data Factory, but both create only one file with all the information, not one file per row as I need (or at least not in the way I set them up).
Is there any tool that can do this in a decent time? Or is there any way to use SSIS or Data Factory for this?
Thanks!
You can use two activities in Azure Data Factory to achieve this:
Use a Lookup activity to get the count of rows.
Use a ForEach activity with the row count as input; since each row is independent, you can keep parallel execution enabled.
Within the ForEach, use a Copy activity with a SQL source filtered to a single row number and a blob destination.
This generates an individual file for each row, and the copies run in parallel.
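A sketch of the kind of source query each Copy activity iteration could use, assuming the ForEach passes the current row number as a pipeline parameter (hard-coded here as 42 for illustration):

-- Pick exactly one image row by position; 42 stands in for the ForEach item value.
WITH numbered AS (
    SELECT [BlobName], [Content],
           ROW_NUMBER() OVER (ORDER BY [BlobName]) AS rn
    FROM [dbo].[DocumentsContent]
)
SELECT [BlobName], [Content]
FROM numbered
WHERE rn = 42;

Setting the sink file name from the BlobName column (for example via a dataset parameter) then yields one blob per row.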
Other ideas:
Option 1)
SSIS Data Flow Task - "Export Column" to local disk
Use FileZilla Pro (or an equivalent tool) to multithread the transfer to Azure
Option 2)
Mount Azure Blob Storage via NFS 3.0
SSIS Data Flow Task - "Export Column" to the mounted disk
Split the workload for parallel execution (ABS guards against negative CHECKSUM values):
WHERE ABS(CHECKSUM([BlobName])) % 5 = 0
WHERE ABS(CHECKSUM([BlobName])) % 5 = 1
WHERE ABS(CHECKSUM([BlobName])) % 5 = 2
WHERE ABS(CHECKSUM([BlobName])) % 5 = 3
WHERE ABS(CHECKSUM([BlobName])) % 5 = 4
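Each filter would sit in the full source query of one of the five parallel packages or data flow tasks; a sketch of the first slice, where the Export Column transformation then writes the [Content] column of each row to its own file:

-- Source rows for slice 0 of 5; the other slices use the remaining remainders.
SELECT [BlobName], [Content]
FROM [dbo].[DocumentsContent]
WHERE ABS(CHECKSUM([BlobName])) % 5 = 0;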

How to prevent ERROR 2013 (HY000) using Aurora Serverless

When performing long-running queries on Aurora Serverless I have seen the following errors after a few minutes:
ERROR 1080 (08S01) Forcing close of thread
ERROR 2013 (HY000) Lost connection to MySQL server during query
The queries use MySQL's LOAD DATA LOCAL INFILE to load large (multi-GB) data files into the database.
How can I avoid these errors?
To solve this, you can change the net_write_timeout item in a DB cluster parameter group to a more suitable value. Here are the steps from the console:
1. Go to the RDS console.
2. Click "Parameter groups" in the left pane.
3. Click "Create parameter group".
4. On the Parameter group details page, for Type choose DB Cluster Parameter Group, give it a name and description, and click "Create".
5. Click the name of the parameter group you created in step 4.
6. Search for "net_write_timeout".
7. Tick the checkbox next to the parameter and click "Edit parameters".
8. Change the value to an integer between 1 and 31536000 (the number of seconds to wait before timing out) and click "Save changes".
9. Click "Databases" in the left pane.
10. Click the database and click "Modify".
11. Under Additional configuration > Database options > DB cluster parameter group, select the parameter group you created in step 4 and click "Continue".
12. Select "Apply immediately" and click "Modify cluster".
Break up your large, multi-GB uploads into smaller chunks. Aurora works better (and faster) loading one hundred 10MB files at once rather than one 1GB file. Assuming your data is already in a loadable format:
Split the file into parts using split
split -n l/100 --additional-suffix="_small" big_file.txt
This results in 100 files named xaa_small, xab_small, etc.
Find the files that match the split suffix using find
files=$(find . -name 'x*_small')
Loop through the files and load each one in parallel
for file in $files; do
  echo "load data local infile '$file' into table my_table;" |
    mysql --defaults-file=/home/ubuntu/.my.cnf --defaults-group-suffix=test &
done

EditRecord failed because of the default alias error

I have a simple form in Access which contains a macro that saves the edit time and a flag after the current record is updated.
However, when the macro attempts to execute I get an error that says:
EditRecord failed because the default alias represents a record which is read only
I have been searching the internet for an answer but I haven't been able to find anything that helps.
This form only has one user on it at a time, and none of the other forms using the same table are open while it's being used.
The table has a primary key, which is an AutoNumber without duplicate values. Besides this, I have changed the record locking to No Locks.
I also get an error when I use DoCmd in my macro instead of OpenQuery. The Data Mode in OpenQuery is set to "Edit", so that part doesn't cause any errors.
I'm using the Macro Tools in Access 2016. I'm not sure about the differences between an Access macro and Access VBA, but this is an Access macro.
Here is the code I have so far:
SetWarnings: Off
Repeat Count: 1
OpenQuery:
  UPDATE table
  SET editDate = Now(), reviewDate = Now(), flag = 0
  WHERE ID = [Forms]![FormName].[ID];
  View: Datasheet
  Data Mode: Edit
GoToRecord: Next

Batch job to export data into CSV

I'm doing my first ABAP job and I don't have much experience, so I need a little help.
What I want to do is create a batch job that runs every morning at a specific time, fetches data from different tables, and exports it as a CSV file. To create that batch job I can use transaction code SM36 or SM37.
But I need some help with how to fetch the data.
Does anyone have example code that I can use or take a look at?
TheG is right, it sounds like you're trying to learn ABAP from scratch with no guidance. That's difficult, but here are some basics:
There are three parts to this:
1. Create a program
2. Generate a file
3. Schedule the job
For 1:
If you go to SE38, you can create a new report. You'll have to check with your colleagues about the namespace, but usually you just start the program name with Z (which puts it in the 'customer' namespace).
In the entry box of SE38, you can type DEMO to pull up lots of SAP-provided demo reports. The names usually give you a hint about what they demonstrate, and you can probably find one that mentions creating a file.
Once you have created your own report through SE38 by typing in the name and hitting Enter, you can use SELECT ... INTO TABLE or SELECT ... ENDSELECT to query the database tables. Highlight SELECT and click the blue "i" icon to pull up SAP's internal documentation.
At its most basic, you can use the WRITE statement to print out the rows and columns of your data.
Once you have your report running, scheduling it with SM36 will be more self-explanatory.
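To make the "generate a file" part concrete, here is a minimal hedged sketch in ABAP (7.40+ syntax). The demo table SFLIGHT, the chosen fields, and the application-server path /tmp/export.csv are placeholders to replace with your own tables and file share:

REPORT z_export_csv.

* Read a few rows and write them out as semicolon-separated values on
* the application server, so the program also works as a background job.
DATA: lt_flights TYPE TABLE OF sflight,
      lv_line    TYPE string,
      lv_file    TYPE string VALUE '/tmp/export.csv'.

SELECT * FROM sflight INTO TABLE lt_flights UP TO 100 ROWS.

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
LOOP AT lt_flights INTO DATA(ls_flight).
  lv_line = |{ ls_flight-carrid };{ ls_flight-connid };{ ls_flight-fldate }|.
  TRANSFER lv_line TO lv_file.
ENDLOOP.
CLOSE DATASET lv_file.

* A short log line so the job log in SM37 shows what happened.
DATA(lv_count) = lines( lt_flights ).
WRITE: / lv_count, 'rows exported to', lv_file.

Once the report runs in dialog mode, schedule it in SM36 with this program as a job step.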
This is very basic ABAP reporting program stuff. Making the report run as a background/batch job is the least of the concerns. Let us help you walk through this.
-> Have you done any report programming before?
-> Do you have the list of tables from which you want the data, and do you know how they are linked?
-> Do you know how often this report will be run and what selection criteria are required?
-> Did you check with the functional team whether you want a 'delta pull' or a 'full pull' every time you run the report?
-> Do you have the file share where you want to output the file? Is it on the presentation server or the application server? If not the presentation server, can you reason out why not?
-> Did you confirm the file name and how it should look?
-> Do you know how to generate a CSV file? If this is a 'production requirement', are there reusable frameworks for handling file operations in your company?
-> Do you have the final format of how the CSV file should look?
-> Did you verify with the functional team whether they want the output data in external format for some fields?
-> Did you check whether there are date fields in your output and what format they should have for consistency?
If you are a little familiar with ABAP, explore answers to the above, write a report, and get it running in dialog mode. Then get back to us and we will help you run it as a batch job.