SSIS low on virtual memory in debug mode

I run packages in debug mode in Visual Studio, but I often get 'low on virtual memory' warnings.
I also often run multiple instances of Visual Studio so that several packages can run simultaneously for different purposes.
The machine has 64 GB of memory, and more than 30 GB is free at the time I start getting the following in the output window. Any ideas? (I've tried larger values for DefaultBufferMaxRows with corresponding increases in DefaultBufferSize.)
Information: 0x4004800C at Data Flow Task, SSIS.Pipeline: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 32 buffers were considered and 32 were locked. Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
Information: 0x4004800F at Data Flow Task: Buffer manager allocated 40 megabyte(s) in 4 physical buffer(s).
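For anyone experimenting with these settings, a minimal sketch of overriding the buffer properties at run time with dtexec instead of re-editing the package (the package path and values are placeholders; the property path assumes the task is named "Data Flow Task", as in the log above):

# Override the data flow buffer settings for a single run (run from PowerShell).
dtexec /FILE "C:\packages\MyPackage.dtsx" `
    /SET "\Package\Data Flow Task.Properties[DefaultBufferSize];104857600" `
    /SET "\Package\Data Flow Task.Properties[DefaultBufferMaxRows];100000"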

Related

Windows Server 2019 Cannot empty recycle bin

I have a Windows Server 2019 machine with 16 GB of RAM and 400 GB of disk space. One of the apps saves images uploaded by users, and it used up all of our disk space. Among other things, I found two directories containing 1 million and 1.2 million text files, so I deleted those as a first step to free up space. All of them went into the Recycle Bin (at the time I didn't know I could use Shift+Del to delete them directly, bypassing the Recycle Bin). We now have about 30 GB free on the hard drive, but we cannot empty the Recycle Bin.
If I open the Recycle Bin, it just hangs while calculating the number of files and the space they use. As it does this, it slowly eats up memory until all memory is used and the server crashes. If I right-click the Recycle Bin and select Empty, nothing appears to happen, but in Task Manager I can see Windows Explorer slowly eating up memory until the system crashes again. So even though I don't open the GUI, the Recycle Bin is still calculating things, which eats up memory until it crashes.
I tried doing this with PowerShell by running Clear-RecycleBin with the Force parameter; it appears to hang in the command window, and in Task Manager I can see it processing and, once again, eating up memory until the system crashes.
I'm really stuck here. How can I empty the Recycle Bin without making it first count the files and estimate the size of the data it will remove?
Thanks.
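One approach sometimes suggested for this situation (a sketch, not from the original thread): skip the shell entirely and delete the hidden per-volume $Recycle.Bin folder, which avoids the file-counting pass; Windows recreates the folder the next time something is deleted. The drive letter is a placeholder, and this needs an elevated PowerShell prompt.

# Remove the hidden $Recycle.Bin folder on the affected volume directly, bypassing
# the shell's item enumeration. Single quotes stop PowerShell from expanding $Recycle.
Remove-Item -LiteralPath 'C:\$Recycle.Bin' -Recurse -Force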

Google OpenRefine won't load a big CSV file

When I try to create a project, I load a CSV file with 3.5 million rows (400 MB), and Refine doesn't finish importing it.
The progress indicator shows 100% and 1037 MB.
I opened refine.ini and raised the memory limit, but it made no difference:
# NOTE: This file is not read if you run the Refine executable directly
# It is only read if you use the refine shell script or refine.bat
no_proxy="localhost,127.0.0.1"
#REFINE_PORT=3334
#REFINE_HOST=127.0.0.1
#REFINE_WEBAPP=main\webapp
# Memory and max form size allocations
#REFINE_MAX_FORM_CONTENT_SIZE=104857600
REFINE_MEMORY=100000M
# Set initial java heap space (default: 256M) for better performance with large datasets
REFINE_MIN_MEMORY=100000M
# Some sample configurations. These have no defaults.
#ANT_HOME=C:\grefine\tools\apache-ant-1.8.1
#JAVA_HOME=C:\Program Files\Java\jdk1.6.0_25
#JAVA_OPTIONS=-XX:+UseParallelGC -verbose:gc -Drefine.headless=true
#JAVA_OPTIONS=-Drefine.data_dir=C:\Users\user\AppData\Roaming\OpenRefine
# Uncomment to increase autosave period to 60 mins (default: 5 minutes) for better performance of long-lasting transformations
#REFINE_AUTOSAVE_PERIOD=60
What should I do?
Based on the testing I did and published at https://groups.google.com/d/msg/openrefine/-loChQe4CNg/eroRAq9_BwAJ, to process 3.5 million rows you probably need to allocate around 8 GB of RAM to have a reasonably responsive project.
As documented in OpenRefine changing the port and host when executable is run directly, when running OpenRefine on Windows, where you set these options depends on whether you start OpenRefine via the exe file or the bat file.
To allocate over 4 GB of RAM, you definitely need to be running a 64-bit version of Java; please check which version of Java OpenRefine is running on (it will use the Java specified in JAVA_HOME). Also be aware that you may have trouble allocating even 4 GB on a 32-bit Java running on a 64-bit OS (see Maximum Java heap size of a 32-bit JVM on a 64-bit OS).
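A quick way to verify the Java bitness, assuming OpenRefine picks up the JVM from JAVA_HOME as described above:

# A 64-bit JVM reports "64-Bit Server VM" in its version banner.
& "$env:JAVA_HOME\bin\java.exe" -version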

CloudSQL database crashes periodically (Out of memory)

We are having a problem where our Cloud SQL database crashes periodically.
The error we are seeing in the logs is:
[ERROR] InnoDB: Write to file ./ib_logfile1 failed at offset 237496832, 1024 bytes should have been written, only 0 were written. Operating system error number 12. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
From what I understand, error number 12 means 'Cannot allocate memory'. Is there a way we can configure Cloud SQL to leave a larger buffer of free memory? The alternative would be to upgrade to get more memory, but from what I understand Cloud SQL automatically uses all the memory available to it... Is upgrading likely to reduce the problem, or would it likely continue in the same way?
Are there any other things we can do to reduce this issue?
It is possible your system is running out of disk space rather than memory, especially if you are running in an HA configuration.
(If disk isn't the issue, you should file a GCP support ticket rather than asking here.)
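If you want to rule disk out first, a minimal sketch using the gcloud CLI (the instance name is a placeholder):

# Show the provisioned disk size (GB) and whether automatic storage increase is enabled.
gcloud sql instances describe my-instance `
    --format="value(settings.dataDiskSizeGb, settings.storageAutoResize)"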

SQL Server 2008 memory caching

I have a service that runs connected to a SQL Server 2008 database. The problem is that some queries take a long time the first time they run, but once cached they finish very fast. Does SQL Server 2008 automatically clear its cache periodically?
SQL Server will not release memory unless there is memory pressure on the server or you explicitly tell it to.
See Microsoft support:
http://support.microsoft.com/kb/321363
Another cause could be that other database objects that need to be brought into memory are pushing the ones you are using out of the buffer pool. In this case, allocating more memory to the instance or writing more efficient queries will help.
So either there is memory pressure from other applications on the server, or you do not have enough memory allocated to the instance for your current workload; but there is no regularly scheduled process, per se, that cleans out SQL Server's memory buffers.
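To illustrate the "explicitly tell it to" part, a minimal sketch that empties the buffer cache on demand, which is exactly why first runs go back to being slow (assumes the SqlServer PowerShell module and sysadmin rights; the instance name is a placeholder):

# Flush dirty pages, then drop clean buffers; cached queries will read from disk again.
Invoke-Sqlcmd -ServerInstance "MYSERVER" -Query @"
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
"@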

SSIS's BufferSizeTuning says "Memory pressure was alleviated, buffer manager is not throttling allocations anymore"

I have SSIS packages that do a simple data transfer between two SQL Servers. There is one parent package and six child packages, all built the same way. I have set the child packages to run as separate processes (ExecuteOutOfProcess=True). I have also enabled logging of the User:BufferSizeTuning event in each child package.
Everything works fine on the DEV server, which is pretty much the same as the PROD server. But on the PROD server I'm getting the following two messages from the User:BufferSizeTuning event (taken from the sysssislog table):
Memory pressure was alleviated, buffer manager is not throttling allocations anymore
Buffer manager is throttling allocations to keep in-memory buffers around 199MB
Furthermore, the job on the PROD server usually runs for about 2-3 hours (in some cases 11 hours!) while on DEV it takes around 30 minutes. Both servers run SSIS 2008.
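For comparing DEV and PROD, a minimal sketch that pulls the BufferSizeTuning entries back out of the log table mentioned above (server and database names are placeholders; point them at wherever the SQL Server log provider writes):

# List throttling messages in order, to see when memory pressure starts and stops.
Invoke-Sqlcmd -ServerInstance "PRODSERVER" -Database "SSISLogs" -Query @"
SELECT starttime, source, message
FROM dbo.sysssislog
WHERE event = 'User:BufferSizeTuning'
ORDER BY starttime;
"@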