While executing an SSIS package, I got the following errors:
The buffer manager failed a memory allocation call for 10484608 bytes, but was unable to swap out any buffers to relieve memory pressure. 20 buffers were considered and 20 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
[DTS.Pipeline] Error: Thread "SourceThread0" has exited with error code 0xC0047038.
[DTS.Pipeline] Error: The Data Flow task failed to create a buffer to call PrimeOutput for output "XML Source 1" (91) on component "GeneralCongfigurations" (98). This error usually occurs due to an out-of-memory condition.
It happens when trying to insert data into a SQL table from an XML file with a Script Component.
How can I solve it?
The message tells you SSIS is using 20 buffers of about 10MB each, roughly 200MB in total.
That is not a very large amount of memory; even on a 1GB machine you would not normally run out.
It is likely that other processes are consuming the rest of the memory - check Task Manager. Often it is SQL Server that consumes all the memory. If you run SQL Server and SSIS on the same machine, restrict the amount of memory SQL Server is allowed to consume (in SQL Server properties), leaving some memory for SSIS - I would recommend leaving at least 0.5GB.
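For reference, here is a minimal T-SQL sketch of capping SQL Server's memory (the same setting exposed in the SQL Server properties dialog); the 1536 MB figure is purely illustrative and should be sized to your machine:

-- Cap SQL Server's memory so SSIS has headroom (example value: 1536 MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1536;
RECONFIGURE;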
You should also look at all of the log messages. In particular, Lookup transforms can log a lot of information about how much memory they're using. You can get some very detailed logs about memory allocation.
I faced the same issue. My XML source is only around 2MB, yet when I started the package it began throwing this OutOfMemory warning. My server has around 8GB of RAM, so it was not a physical memory issue. The local SQL Server service was taking close to 6GB, which I could see in Task Manager. As my SSIS package's destination was a different database, I did not need the local SQL Server service to be running. The moment I stopped that SQL Server service, my SSIS package executed successfully.
@Chris Pickford
You want to use PerfMon Counters:
Memory\
    Available Bytes / Available MBytes
    Committed Bytes
SQLServer:SSIS Pipeline\
    Buffer Memory
    Buffers in use
    Buffers spooled
Also, if you've got logging enabled on your package, look at the User:BufferSizeTuning event.
Is there a memory usage limitation for SQL Server Management Studio, and is there a way to free SSMS memory without restarting it? I am currently using SSMS 2012 and I keep getting an error message like the one below:
'An error occurred while executing batch. Error message is: Exception of type 'System.OutOfMemoryException' was thrown.'
My computer has 24GB of RAM with 17GB free, but I still get the above message when executing some heavy queries. I store some XML data in tables, so a query can bring back huge amounts of data. However, this issue does not occur all the time; usually it appears after I have run SSMS for a while, which makes me think it is a memory usage limitation. At that point I have to restart SSMS to get my query executed. Is there a way to free memory without restarting SSMS?
Álvaro González is right. I installed 64-bit SSMS and haven't seen the same issue again so far.
I am creating PDF documents on an AWS server, with Sidekiq processing the job in the background.
While the PDF file is being created, the Rails application polls the database every 2 seconds to check whether the PDF file has been created.
This morning I got this error message on the Sidekiq side:
ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)
I am using Amazon RDS with MySQL on it.
As a temporary solution, I increased the pool parameter in database.yml from 10 to 30; however, I realize this is just a temporary patch.
How to fix it properly?
Thank you
I think that your solution is actually the correct one.
ActiveRecord::ConnectionPool is thread based, i.e. it tries to obtain a separate connection for each thread that wants to work with the database. If more threads want to access the database than the total size of the connection pool (configured with the pool option in database.yml), ConnectionPool waits up to 5 seconds by default for a connection to be freed by another thread. After this 5-second timeout, the ActiveRecord::ConnectionTimeoutError exception is raised.
Now, Sidekiq uses 25 worker threads by default. So, under higher load, it is perfectly possible that up to 25 jobs (threads) will try to access the database at the same time. If your pool was set to 10, the excess workers had to wait for the others to finish, and evidently some thread waited too long.
So, either enlarge the connection pool to a value at least a little higher than 25 (the number of Sidekiq workers), just as you did, or run Sidekiq with fewer workers, e.g. sidekiq -c 5. Finally, always ensure that you allow enough incoming connections on the MySQL side (by default the limit is over 100).
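For reference, a minimal database.yml sketch of the pool setting (the adapter, database name, and pool size are illustrative values, not a prescription):

production:
  adapter: mysql2
  database: myapp_production   # placeholder name
  pool: 30                     # headroom above Sidekiq's 25 default threads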
Polling generally doesn't scale as a solution.
Without knowing more about the structure of your application I would be tempted to leverage some concurrency constructs from a gem like concurrent-ruby.
Specifically, the creation of a PDF file maps quite closely to the concept of a Future or a Promise.
I would look at rearchitecting your worker:
Your PDF generation code should be a Promise. It should hold a database connection only long enough to write the resulting PDF to the database, not for the whole duration of PDF generation.
Your main application code should spin up a PDF generation promise on Sidekiq as usual. Instead of polling the database, this code simply waits for the Promise to complete or fail; if the promise completes successfully the PDF is in the database, and if it fails you have an exception trace, etc. (see the sketch below).
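A minimal sketch with concurrent-ruby, assuming hypothetical generate_pdf, PdfRecord, and order names standing in for your own application code:

require 'concurrent'

pdf_promise = Concurrent::Promise.execute do
  data = generate_pdf(order)  # long-running work; no DB connection held here
  # Borrow a connection from the pool only for the write, then return it.
  ActiveRecord::Base.connection_pool.with_connection do
    PdfRecord.create!(order_id: order.id, body: data)
  end
end

pdf_promise.wait                 # block until the promise completes or fails
if pdf_promise.fulfilled?
  # the PDF is now in the database
else
  raise pdf_promise.reason       # the exception raised inside the worker
end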
As always, ymmv
I am working in SQL Server Report Builder 3.0. All reports used to run fine, but suddenly when I run a report an error window pops up showing the following:
System.Web.Services.Protocols.SoapException: An internal error occurred on the report server. See the error log for more details.
Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException: An internal error occurred on the report server. See the error log for more details.
System.IO.FileLoadException: Could not load file or assembly 'Microsoft.ReportingServices.ProcessingCore' or one of its dependencies. There is not enough space on the disk. (Exception from HRESULT: 0x80070070)
at Microsoft.ReportingServices.WebServer.ReportingService2010Impl.CreateReportEditSession(String Report, String Parent, Byte[] Definition, String& EditSessionID, Warning[]& Warnings)
at Microsoft.ReportingServices.WebServer.ReportingService2010.CreateReportEditSession(String Report, String Parent, Byte[] Definition, String& EditSessionID, Warning[]& Warnings)
Furthermore, I freed up another 2 GB of space for the log file, and it again swallowed all the remaining space; when I last checked, only 8.3 MB of free space remained.
OS: Windows server 2003
We have two recovery models:
Simple Recovery Model
Full / Bulk Logged Recovery Model
In my experience of this scenario, on most SQL Servers there is no backup of the transaction log. Full backups or differential backups are common practice, but transaction log backups are genuinely rare. So the transaction log file grows forever (until the disk is full). In this case the recovery model should be set to "Simple". Don't forget to modify the system databases "model" and "tempdb", too.
A backup of the "tempdb" database makes no sense, so the recovery model of this db should always be "Simple".
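A hedged T-SQL equivalent (YourDb is a placeholder database name):

-- Stop the log from growing indefinitely when log backups are not taken.
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
-- New databases inherit their recovery model from "model".
ALTER DATABASE model SET RECOVERY SIMPLE;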
Process:
What I do on my database:
Right-click the database -> Properties -> Options -> set Recovery model: Simple.
Then right-click -> Tasks -> Shrink -> Files.
That's it; this will free up space (a T-SQL equivalent follows).
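The same steps in T-SQL, as a sketch (YourDb and YourDb_log are placeholder names; query sys.database_files for the real logical log file name):

USE YourDb;
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
-- Find the logical name of the log file, then shrink it (target size in MB).
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
DBCC SHRINKFILE (YourDb_log, 1);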
But for better practice we should set the recovery model to Full; under Full recovery the log file will keep growing unless we also back up the transaction log regularly. To understand this scenario better, I suggest you watch these free videos:
• SQL Server Backups Demystified
• SQL Server Logging Essentials
I'm running into an interesting threading problem while running a D program that uses the MySQL C API. I am getting error 2013, "Lost connection to MySQL server during query." The problem appears to occur when enough threads flood the network interface buffer while the server still has more to transfer. This is my best guess based on some research and on running the program on two different computers: one computer has a 100Mb connection to the server and the other has a 1Gb connection. The computer with the 100Mb connection throws the error, while the 1Gb computer does not. I am wondering if I am running into what is described in the first paragraph of How to Write a Threaded Client in the MySQL documentation. If I am, what do I need to do with SIGPIPE, and how do I do it?
For those who are interested, I am calling mysql_library_init before any library call and I am creating a new MYSQL* for each thread with mysql_init and mysql_real_connect. Also of note, the queries that I am executing are small SELECTs, only a few thousand records returned from each query and all queries are executed from the same table.
Please try this before mysql_real_connect:
my_bool myb = 1;  // enable automatic reconnection
mysql_options(conn, mysql_option.MYSQL_OPT_RECONNECT, &myb);
Also please check this MySQL troubleshooting page:
http://dev.mysql.com/doc/refman/5.5/en/gone-away.html
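As for the SIGPIPE part of the question, here is a hedged C sketch of the threaded-client pattern (connection parameters and the query are placeholders): the library is initialized once before any threads start, each thread gets its own MYSQL handle plus mysql_thread_init/mysql_thread_end, and SIGPIPE is ignored so a dropped connection surfaces as an error return instead of killing the process:

#include <mysql.h>
#include <pthread.h>
#include <signal.h>
#include <stddef.h>

void *worker(void *arg)
{
    mysql_thread_init();                      /* per-thread init */
    MYSQL *conn = mysql_init(NULL);
    if (mysql_real_connect(conn, "host", "user", "pass", "db", 0, NULL, 0)) {
        if (mysql_query(conn, "SELECT id FROM t LIMIT 10") == 0) {
            MYSQL_RES *res = mysql_store_result(conn);
            if (res)
                mysql_free_result(res);
        }
    }
    mysql_close(conn);
    mysql_thread_end();                       /* per-thread cleanup */
    return NULL;
}

int main(void)
{
    signal(SIGPIPE, SIG_IGN);            /* ignore broken-pipe signals */
    mysql_library_init(0, NULL, NULL);   /* once, before spawning threads */
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    mysql_library_end();
    return 0;
}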
I am facing a very annoying exception in GlassFish:
SEVERE: Exception in thread "RMI RenewClean-[192.168.1.2:8686]"
SEVERE: PermGen space
java.lang.OutOfMemoryError: PermGen space
My hardware resources are ample, and when I open Task Manager and look at the resources, plenty is available.
This exception forces me to restart my PC every 10 to 15 minutes :( What should I do?
You need to increase the amount of PermGen space using the -XX:MaxPermSize=256m flag.
See this related SO question
In order to set this up in Glassfish, use the following steps:
Connect to the admin interface of your Glassfish server (localhost:4848)
Move to Application Server > JVM Settings > JVM Options, check the global amount of memory allocated to your instance of Glassfish (it should be something like -Xmx512m or more), and add one JVM option with the value:
-XX:MaxPermSize=256m
The amount of memory depends on the amount you need. Increase it if it keeps crashing, but reading the PermGen article may help in determining the right amount.
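If you prefer the command line, the same option can be added with asadmin; this is a sketch assuming the default domain, and note that asadmin requires the colon to be escaped:

asadmin create-jvm-options '-XX\:MaxPermSize=256m'
asadmin stop-domain && asadmin start-domain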
The PermGen space error is one of the most irritating errors in GlassFish.
It appears when you do a lot of deploys or redeploys on the server, because the server reserves memory and never frees it. I recommend you monitor the server with Apache JMeter to watch the amount of memory (and if it is near the maximum, restart the server before it crashes).
As a temporary fix, you can add some JVM options in the GlassFish administration console to rein in its memory consumption, as amccormack said.
I recommend you to use
-XX:PermSize=512m
-XX:MaxPermSize=512m
-XX:+CMSClassUnloadingEnabled
By the way, once the PermGen space error appears, the server will not respond (even to asadmin stop-domain). But you can easily restart it if you kill the Java process that runs GlassFish and then call asadmin start-domain. I think that is quicker than restarting the whole server.
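A hedged sketch of that restart, assuming Linux and the default domain1 (the pgrep pattern is an assumption about how the process appears in the process list):

pgrep -fl glassfish        # find the PID of the GlassFish java process
kill -9 <PID>              # force-kill it (replace <PID> with the number found)
asadmin start-domain domain1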