Increasing the size of the queue in Elasticsearch? - exception

I've been looking at my Elasticsearch logs, and I came across this error:
rejected execution (queue capacity 1000) on org.elasticsearch.search.action.SearchServiceTransportAction$23#6d32fa18
After looking up the error, the general consensus was to increase the size of the queue, as discussed here - https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
The question I have is: how do I actually do this? Is there a configuration file somewhere that I am missing?

From Elasticsearch 5 onward you cannot use the API to update the thread pool search queue size. It is now a node-level setting. See this:
Thread pool settings are now node-level settings. As such, it is not possible to update thread pool settings via the cluster settings API.
To update the thread pool you have to add thread_pool.search.queue_size: <new size> to the elasticsearch.yml file of each node and then restart Elasticsearch.
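For example, a minimal sketch assuming a package install with the config in /etc/elasticsearch and a systemd-managed service (2000 is just an illustrative value):
echo 'thread_pool.search.queue_size: 2000' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch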

To change the queue size one could add it in the config file for each of the nodes as follows:
threadpool.search.queue_size: <new queue size>
However, this also requires a cluster restart.
Up to Elasticsearch 2.x you could update this via the cluster settings API, which did not require a cluster restart; however, this option is gone in Elasticsearch 5.x and newer:
curl -XPUT _cluster/settings -d '{
    "persistent" : {
        "threadpool.search.queue_size" : <new_size>
    }
}'
You can query the queue size as follows:
curl "<server>/_cat/thread_pool?v&h=search.queueSize"
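Related to the rejected execution error in the question, you can also watch how many searches a node has rejected; the column names below assume Elasticsearch 5.x or later, and localhost:9200 is a placeholder for your node address:
curl -s "localhost:9200/_cat/thread_pool/search?v&h=node_name,name,active,queue,rejected,completed"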

As of Elasticsearch 6 the type of the search thread pool has changed to fixed_auto_queue_size, which means setting thread_pool.search.queue_size in elasticsearch.yml is not enough; you have to set the min_queue_size and max_queue_size parameters as well, like this:
thread_pool.search.queue_size: <new_size>
thread_pool.search.min_queue_size: <new_size>
thread_pool.search.max_queue_size: <new_size>
I recommend using _cluster/settings?include_defaults=true to view the current thread pool settings on your nodes before making any changes. For more information about the fixed_auto_queue_size thread pool, read the docs.
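As a quick check before changing anything, the defaults can be pulled like this (assuming the node listens on localhost:9200; filter_path just trims the response down to the search pool):
curl -s "localhost:9200/_cluster/settings?include_defaults=true&filter_path=defaults.thread_pool.search&pretty"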

Related

PouchDB Replication throws error upon replication

When I try to replicate a remote CouchDB (on Ubuntu 14.04, 64-bit) with my local PouchDB, I encounter this strange error.
My CouchDB is proxied via nginx and running on HTTPS. Traffic from the client to nginx is SSL, while nginx to CouchDB is plain HTTP. CORS requests are enabled in CouchDB. The nginx configuration closely follows the one recommended by CouchDB. Sync from the database is working fine, however I get the errors below when debugging via Chrome Version 54.0.2840.100 (64-bit).
Following is the full stack trace of the error.
raven.min.js:2 Error: There was a problem getting docs.
at finishBatch (http://localhost:8100/lib/pouchdb/dist/pouchdb.js:6410:13)
at processQueue (http://localhost:8100/lib/ionic/js/ionic.bundle.js:27879:28)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:27895:27
at Scope.$eval (http://localhost:8100/lib/ionic/js/ionic.bundle.js:29158:28)
at Scope.$digest (http://localhost:8100/lib/ionic/js/ionic.bundle.js:28969:31)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:29197:26
at completeOutstandingRequest (http://localhost:8100/lib/ionic/js/ionic.bundle.js:18706:10)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:18978:7
at d (http://localhost:8100/lib/raven-js/dist/raven.min.js:2:4308)
raven.min.js:2 Paused in lessondb replicate Error: There was a problem getting docs.
at finishBatch (http://localhost:8100/lib/pouchdb/dist/pouchdb.js:6410:13)
at processQueue (http://localhost:8100/lib/ionic/js/ionic.bundle.js:27879:28)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:27895:27
at Scope.$eval (http://localhost:8100/lib/ionic/js/ionic.bundle.js:29158:28)
at Scope.$digest (http://localhost:8100/lib/ionic/js/ionic.bundle.js:28969:31)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:29197:26
at completeOutstandingRequest (http://localhost:8100/lib/ionic/js/ionic.bundle.js:18706:10)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:18978:7
at d (http://localhost:8100/lib/raven-js/dist/raven.min.js:2:4308)
The network logs in Chrome show that some requests are cancelled.
I am using CouchDB version 1.6.1 and PouchDB version 5.3.2.
I use the following command to replicate the DBs:
myDB.replicate.from(remote_db_url, {
    live: true,
    retry: true,
    heartbeat: false
});
It would also be great if someone could shed some light on the heartbeat parameter.
Note: I'm not able to solve the error you describe. Maybe a full stack trace rather than a screenshot might help...
But I will try to shed some light on the heartbeat parameter: Reading the docs already helps a little. See the advanced options for the replicate method:
options.heartbeat: Configure the heartbeat supported by CouchDB which keeps the change connection alive.
So let's look into CouchDB's docs to see what this parameter does:
Networks are a tricky beast, and sometimes you don’t know whether there are no changes coming or your network connection went stale. If you add another query parameter, heartbeat=N, where N is a number, CouchDB will send you a newline character each N milliseconds. As long as you are receiving newline characters, you know there are no new change notifications, but CouchDB is still ready to send you the next one when it occurs.
So basically it is a keep-alive mechanism that sends a message (e.g. a newline) every n milliseconds (where n is the heartbeat value you specify) to make sure the connection between the two databases is still working.
Setting the value to false will disable this mechanism.
Regarding which values can be used for this parameter:
The PouchDB docs further state that the changes method has a similar parameter, described like this:
options.heartbeat: For http adapter only, time in milliseconds for server to give a heartbeat to keep long connections open. Defaults to 10000 (10 seconds), use false to disable the default.
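To see the heartbeat behaviour directly, you can hit CouchDB's changes feed yourself; the host and database name below are placeholders for your own setup:
curl -s "https://couch.example.com/mydb/_changes?feed=continuous&heartbeat=10000"
While no documents change, the server emits a blank line roughly every 10 seconds - exactly the keep-alive described above.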

Elasticsearch 5.0.0 cluster node not joining

OK, this shouldn't be this hard. I'm trying to run 2 nodes in an Elasticsearch cluster and getting an exception when trying to start node-1 (node-2, which is master, is already started). Using Elasticsearch v5.0.0 for both instances.
Exception: failed to send join request to master, reason RemoteTransportException can't add node found existing node with the same id but is a different node instance]
Node-1 config:
node.name: SANNNNN-1
network.host: 10.3.185.250
discovery.zen.ping.unicast.hosts: ["10.3.185.251:9300"]
Node-2 config:
node.name: SAN-2
network.host: 10.3.185.251
discovery.zen.ping.unicast.hosts: ["10.3.185.251:9300"]
Full Exception on node 2:
[INFO ][o.e.d.z.ZenDiscovery ] [SANNNNN-1] failed to send join request to master [{SAN-2}{DxExoYHHTu2-rFvuvQSuEg}{OfYBe97HQCmcha63CFiYlQ}{10.3.185.251}{10.3.185.251:9300}], reason [RemoteTransportException[[SAN-2][10.3.185.251:9300][internal:discovery/zen/join]]; nested: IllegalArgumentException[can't add node {SANNNNN-1}{DxExoYHHTu2-rFvuvQSuEg}{hP4gLRugRgWzSuNnEhGHSw}{10.3.185.250}{10.3.185.250:9300}, found existing node {SAN-2}{DxExoYHHTu2-rFvuvQSuEg}{OfYBe97HQCmcha63CFiYlQ}{10.3.185.251}{10.3.185.251:9300} with the same id but is a different node instance]; ]
OK, so the issue was copying the Elasticsearch folder from one node to another over scp. Elasticsearch saves the node id in the elasticsearch/data/ folder. I deleted the data folder on one node and restarted it, and the cluster is up and running. Hope this saves someone the hassle.
Remove the directory <Elastic search home>/data and restart the ES node. This issue is due to Elasticsearch storing the node id in this directory, and it is a common mistake when copying a working Elasticsearch directory from one node to another.
After fixing the issue, check the cluster status like this:
curl -X GET "localhost:9200/_cluster/health"
This works with Elasticsearch 6 as well.
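A hedged sketch of that reset on a node managed by systemd (paths assume a package install; a tar.gz install keeps its data under <Elastic search home>/data instead):
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*    # removes this node's local shard copies along with the stored node id
sudo systemctl start elasticsearch
Then re-check cluster health with the curl command above.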
I had the same issue after cloning a data node in Azure. I ended up finding the data directory by starting from the root folder:
/datadisks/disk1/elasticsearch/data
I kept reading that others found the folder elsewhere, so I wanted to share mine here.
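If you are not sure where the data directory lives on a given install, two hedged ways to locate it (the config path below assumes a package install, and the nodes subfolder exists in pre-8.x data layouts):
grep path.data /etc/elasticsearch/elasticsearch.yml
sudo find / -type d -name nodes -path "*elasticsearch*" 2>/dev/null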

Google Compute instance won't mount persistent disk, maintains ~100% CPU

During some routine use of my web server (saving posts via WordPress), my instance suddenly jumped up to 400% CPU usage and wouldn't come back down below 100%. Restarting and stopping/starting the instance didn't change anything.
Looking at the last bit of my serial output:
[ 0.678602] md: Waiting for all devices to be available before autodetect
[ 0.679518] md: If you don't use raid, use raid=noautodetect
[ 0.680548] md: Autodetecting RAID arrays.
[ 0.681284] md: Scanned 0 and added 0 devices.
[ 0.682173] md: autorun ...
[ 0.682765] md: ... autorun DONE.
[ 0.683716] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[ 0.685298] Please append a correct "root=" boot option; here are the available partitions:
[ 0.686676] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.688489] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-30-generic #34~14.04.1-Ubuntu
[ 0.689287] Hardware name: Google Google, BIOS Google 01/01/2011
[ 0.689287] ffffea00008ae400 ffff880024ee7db8 ffffffff817af477 000000000000111e
[ 0.689287] ffffffff81a7c6c0 ffff880024ee7e38 ffffffff817a9338 ffff880024ee7dd8
[ 0.689287] ffffffff00000010 ffff880024ee7e48 ffff880024ee7de8 ffff880024ee7e38
[ 0.689287] Call Trace:
[ 0.689287] [<ffffffff817af477>] dump_stack+0x45/0x57
[ 0.689287] [<ffffffff817a9338>] panic+0xc1/0x1f5
[ 0.689287] [<ffffffff81d3e5f3>] mount_block_root+0x210/0x2a9
[ 0.689287] [<ffffffff81d3e822>] mount_root+0x54/0x58
[ 0.689287] [<ffffffff81d3e993>] prepare_namespace+0x16d/0x1a6
[ 0.689287] [<ffffffff81d3e304>] kernel_init_freeable+0x1f6/0x20b
[ 0.689287] [<ffffffff81d3d9a7>] ? initcall_blacklist+0xc0/0xc0
[ 0.689287] [<ffffffff8179fab0>] ? rest_init+0x80/0x80
[ 0.689287] [<ffffffff8179fabe>] kernel_init+0xe/0xf0
[ 0.689287] [<ffffffff817b6d98>] ret_from_fork+0x58/0x90
[ 0.689287] [<ffffffff8179fab0>] ? rest_init+0x80/0x80
[ 0.689287] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 0.689287] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
(Not sure if it's obvious from that, but I'm using the standard Ubuntu 14.04 image)
I've tried taking snapshots and mounting them on new instances, and now I've even deleted the instance and mounted the disk onto a new one; still the same issue and exactly the same serial output.
I really hope my data has not been hopelessly corrupted. Not sure if anyone has any suggestions on recovering data from a persistent disk?
Note that the accepted answer for: Google Compute Engine VM instance: VFS: Unable to mount root fs on unknown-block did not work for me.
I posted this on another question, but this question is worded better, so I'll re-post it here.
What Causes This?
That is the million dollar question. After inspecting my GCE VM, I found out there were 14 different kernels installed, taking up several hundred MB of space. Most of the kernels didn't have a corresponding initrd.img file and were therefore not bootable (including 3.19.0-39-generic).
I certainly never went around trying to install random kernels, and once removed, they no longer appear as available upgrades, so I'm not sure what happened. Seriously, what happened?
Edit: New response from Google Cloud Support.
I received another disconcerting response. This may explain the additional, errant kernels.
"On rare occasions, a VM needs to be migrated from one physical host to another. In such case, a kernel upgrade and security patches might be applied by Google."
How to recover your instance...
After several back-and-forth emails, I finally received a response from support that allowed me to resolve the issue. Be mindful, you will have to change things to match your unique VM.
Take a snapshot of the disk first in case we need to roll back any of the changes below.
Edit the properties of the broken instance to disable this option: "Delete boot disk when instance is deleted"
Delete the broken instance.
IMPORTANT: ensure not to select the option to delete the boot disk. Otherwise, the disk will get removed permanently!!
Start up a new temporary instance.
Attach the broken disk (this will appear as /dev/sdb1) to the temporary instance
When the temporary instance is booted up, do the following:
In the temporary instance:
# Run fsck to fix any disk corruption issues
$ sudo fsck.ext4 -a /dev/sdb1
# Mount the disk from the broken vm
$ sudo mkdir /mnt/sdb
$ sudo mount /dev/sdb1 /mnt/sdb/ -t ext4
# Find out the UUID of the broken disk. In this case, the uuid of sdb1 is d9cae47b-328f-482a-a202-d0ba41926661
$ ls -alt /dev/disk/by-uuid/
lrwxrwxrwx. 1 root root 10 Jan 6 07:43 d9cae47b-328f-482a-a202-d0ba41926661 -> ../../sdb1
lrwxrwxrwx. 1 root root 10 Jan 6 05:39 a8cf6ab7-92fb-42c6-b95f-d437f94aaf98 -> ../../sda1
# Update the UUID in grub.cfg (if necessary)
$ sudo vim /mnt/sdb/boot/grub/grub.cfg
Note: This ^^^ is where I deviated from the support instructions.
Instead of modifying all the boot entries to set root=UUID=[uuid character string], I looked for all the entries that set root=/dev/sda1 and deleted them. I also deleted every entry that didn't set an initrd.img file. The top boot entry with correct parameters in my case ended up being 3.19.0-31-generic. But yours may be different.
# Flush all changes to disk
$ sudo sync
# Shut down the temporary instance
$ sudo shutdown -h now
Finally, detach the HDD from the temporary instance, and create a new instance based off of the fixed disk. It will hopefully boot.
Assuming it does boot, you have a lot of work to do. If you have half as many unused kernels as me, then you might want to purge the unused ones (especially since some are likely missing a corresponding initrd.img file).
I used the second answer (the terminal-based one) in this askubuntu question to purge the other kernels.
Note: Make sure you don't purge the kernel you booted in with!
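As a hedged illustration of that cleanup on Ubuntu (the version string below is only a placeholder; confirm your running kernel and installed kernels first):
uname -r
dpkg -l 'linux-image-*' | grep ^ii
sudo apt-get purge linux-image-3.19.0-39-generic
sudo update-grub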
In order to recover your data, you need to create a brand new instance where you can ssh, and attach the corrupted disk to it as a secondary disk. More information can be found in this article. I would suggest taking a snapshot of the corrupted disk before attaching it, for backup purposes.

Unable to access Google Compute Engine instance using external IP address

I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now, suddenly, the instance cannot be accessed using its external IP address.
I logged in to the developer console and tried rebooting the instance but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the Serial port output it appears the init module is not loading properly.
I am pasting below the last few lines from the serial port output of the virtual machine.
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with a higher capacity using your PD; however, if a script is causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project=<project-id> getserialportoutput <instance-name>
If the issue continues, you can either:
- Make a snapshot of your PD and work on a copy, or
- Delete the instance without deleting the PD
Then attach and mount the PD to another VM as a second disk, so you can access it to find out what is causing the issue. Visit this link https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit this page http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inode troubleshooting.
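Once the PD is attached and mounted on the second VM (here assumed at /mnt/pd), a quick hedged way to confirm inode exhaustion and find the offending directory:
df -i /mnt/pd
for d in /mnt/pd/*; do echo "$(sudo find "$d" -xdev 2>/dev/null | wc -l) $d"; done | sort -rn | head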
Make sure the Allow HTTP traffic setting on the VM is still enabled.
Then check which network firewall you are using and its rules.
If your network is set up to use an ephemeral IP, it will periodically be released back. This will cause your IP to change over time. Set it to static/reserved instead (on the Networks page).
https://developers.google.com/compute/docs/instances-and-network#externaladdresses

Frequent worker timeout

I have set up gunicorn with 3 workers, 30 worker connections, and the eventlet worker class. It is set up behind nginx. After every few requests, I see this in the logs:
[ERROR] gunicorn.error: WORKER TIMEOUT (pid:23475)
None
[INFO] gunicorn.error: Booting worker with pid: 23514
Why is this happening? How can I figure out what's going wrong?
We had the same problem using Django + nginx + gunicorn. Following the Gunicorn documentation, we configured the graceful-timeout, which made almost no difference.
After some testing we found the solution: the parameter to configure is timeout (and not graceful timeout). It works like a clock.
So, do the following:
1) Open the gunicorn configuration file
2) Set TIMEOUT to whatever you need - the value is in seconds
NUM_WORKERS=3
TIMEOUT=120
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--timeout $TIMEOUT \
--log-level=debug \
--bind=127.0.0.1:9000 \
--pid=$PIDFILE
On Google Cloud
Just add --timeout 90 to entrypoint in app.yaml
entrypoint: gunicorn -b :$PORT main:app --timeout 90
Run Gunicorn with --log-level debug.
It should give you an app stack trace.
Is this endpoint taking too much time?
Maybe you are using Flask without asynchronous support, so every request blocks the call. To add async support without much effort, use the gevent worker.
With gevent, a new call will spawn a new greenlet, and your app will be able to receive more requests:
pip install gevent
gunicorn .... --worker-class gevent
The official Microsoft Azure documentation for running Flask apps on Azure App Service (Linux apps) states to use a timeout of 600:
gunicorn --bind=0.0.0.0 --timeout 600 application:app
https://learn.microsoft.com/en-us/azure/app-service/configure-language-python#flask-app
WORKER TIMEOUT means your application cannot respond to the request in a defined amount of time. You can set this using the gunicorn timeout settings. Some applications need more time to respond than others.
Another thing that may affect this is the choice of worker type:
The default synchronous workers assume that your application is resource-bound in terms of CPU and network bandwidth. Generally this means that your application shouldn’t do anything that takes an undefined amount of time. An example of something that takes an undefined amount of time is a request to the internet. At some point the external network will fail in such a way that clients will pile up on your servers. So, in this sense, any web application which makes outgoing requests to APIs will benefit from an asynchronous worker.
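For example, the two knobs above can be combined on the command line; myproject.wsgi:application is a placeholder for your own WSGI entry point, and gevent has to be installed first:
pip install gevent
gunicorn myproject.wsgi:application --workers 3 --worker-class gevent --timeout 120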
When I got the same problem as yours (I was trying to deploy my application using Docker Swarm), I tried increasing the timeout and using another worker class, but both failed.
Then I suddenly realised that I was limiting the resources too low for the service inside my compose file. This is what slowed down the application in my case:
deploy:
  replicas: 5
  resources:
    limits:
      cpus: "0.1"
      memory: 50M
  restart_policy:
    condition: on-failure
So I suggest that you first check what is slowing down your application.
Could it be this?
http://docs.gunicorn.org/en/latest/settings.html#timeout
Other possibilities could be your response is taking too long or is stuck waiting.
This worked for me:
gunicorn app:app -b :8080 --timeout 120 --workers=3 --threads=3 --worker-connections=1000
If you have eventlet add:
--worker-class=eventlet
If you have gevent add:
--worker-class=gevent
I got the same problem in Docker.
In Docker I keep a trained LightGBM model + Flask serving requests. As the HTTP server I used gunicorn 19.9.0. When I ran my code locally on my Mac laptop everything worked just perfectly, but when I ran the app in Docker my POST JSON requests froze for some time, and then the gunicorn worker failed with a [CRITICAL] WORKER TIMEOUT exception.
I tried tons of different approaches, but the only one that solved my issue was adding worker_class=gthread.
Here is my complete config:
import multiprocessing
workers = multiprocessing.cpu_count() * 2 + 1
accesslog = "-" # STDOUT
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(q)s" "%(D)s"'
bind = "0.0.0.0:5000"
keepalive = 120
timeout = 120
worker_class = "gthread"
threads = 3
I had a very similar problem. I also tried using "runserver" to see if I could find anything, but all I got was a message saying Killed.
So I thought it could be a resource problem, and I went ahead and gave more RAM to the instance, and it worked.
You need to use another worker class, an asynchronous one such as gevent or tornado. See the quotes below for more explanation.
First explanation:
You may also want to install Eventlet or Gevent if you expect that your application code may need to pause for extended periods of time during request processing
Second one:
The default synchronous workers assume that your application is resource bound in terms of CPU and network bandwidth. Generally this means that your application shouldn’t do anything that takes an undefined amount of time. For instance, a request to the internet meets this criteria. At some point the external network will fail in such a way that clients will pile up on your servers.
If you are using GCP then you have to set workers per instance type.
Link to GCP best practices https://cloud.google.com/appengine/docs/standard/python3/runtime
timeout is a key parameter for this problem; however, it was not the right fix for me.
I found that there was no gunicorn timeout error when I set workers=1.
When I looked through my code, I found some socket connections (socket.send & socket.recv) in the server init.
socket.recv blocks the code, and that's why it always timed out when workers>1.
Hope this gives some ideas to people who have the same problem as me.
For me, the solution was to add --timeout 90 to my entrypoint, but it wasn't working because I had TWO entrypoints defined: one in app.yaml and another in my Dockerfile. I deleted the unused entrypoint and added --timeout 90 to the other.
For me, it was because I forgot to set up a firewall rule on the database server for my Django app.
Frank's answer pointed me in the right direction. I have a DigitalOcean droplet accessing a managed DigitalOcean PostgreSQL database. All I needed to do was add my droplet to the database's "Trusted Sources".
(Click on the database in the DO console, then click on Settings. Edit Trusted Sources and select the droplet name; click in the editable area and it will be suggested to you.)
Check that your workers are not killed by a health check. A long request may block the health check request, and the worker gets killed by your platform because the platform thinks that the worker is unresponsive.
E.g. if you have a 25-second-long request, and a liveness check is configured to hit a different endpoint in the same service every 10 seconds, time out in 1 second, and retry 3 times, this gives 10 + 1*3 ~ 13 seconds, and you can see that it would trigger sometimes but not always.
The solution, if this is your case, is to reconfigure your liveness check (or whatever health check mechanism your platform uses) so it can wait until your typical request finishes. Or allow for more threads - something that makes sure that the health check is not blocked for long enough to trigger worker kill.
You can see that adding more workers may help with (or hide) the problem.
The easiest way that worked for me is to create a new config.py file in the same folder where your app.py lives and put the timeout, plus any other special configuration you need, inside it:
timeout = 999
Then just run the server while pointing to this configuration file
gunicorn -c config.py --bind 0.0.0.0:5000 wsgi:app
Note that for this to work you also need a wsgi.py in the same directory, containing the following:
from myproject import app
if __name__ == "__main__":
    app.run()
Cheers!
Apart from the gunicorn timeout settings that have already been suggested, since you are using nginx in front you can also check whether these two parameters help: proxy_connect_timeout and proxy_read_timeout, which default to 60 seconds. You can set them in your nginx configuration file like this:
proxy_connect_timeout 120s;
proxy_read_timeout 120s;
In my case I came across this issue when sending larger (10 MB) files to my server. My development server (app.run()) received them with no problem, but gunicorn could not handle them.
For people who run into the same problem I did: my solution was to send the file in chunks, like this:
import requests

# location, url, server_file_name, name and key are defined elsewhere in the original code
def upload_to_server():
    upload_file_path = location

    def read_in_chunks(file_object, chunk_size=524288):
        """Lazy function (generator) to read a file piece by piece.
        Default chunk size: 512 KB."""
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

    with open(upload_file_path, 'rb') as f:
        for piece in read_in_chunks(f):
            r = requests.post(
                url + '/api/set-doc/stream' + '/' + server_file_name,
                files={name: piece},
                headers={'key': key, 'allow_all': 'true'})
my flask server:
@app.route('/api/set-doc/stream/<name>', methods=['GET', 'POST'])
def api_set_file_streamed(name):
    folder = escape(name)  # secure_filename(escape(name))
    if 'key' in request.headers:
        if request.headers['key'] != key:
            return '', 404  # Flask views cannot return a bare int
    else:
        return '', 404
    for fn in request.files:
        file = request.files[fn]
        if fn == '':
            print('no file name')
            flash('No selected file')
            return 'fail'
        if file and allowed_file(file.filename):
            file_dir_path = os.path.join(app.config['UPLOAD_FOLDER'], folder)
            if not os.path.exists(file_dir_path):
                os.makedirs(file_dir_path)
            file_path = os.path.join(file_dir_path, secure_filename(file.filename))
            with open(file_path, 'ab') as f:
                f.write(file.read())
            return 'success'
    return '', 404
In case you have changed the name of the Django project, you should also go to
cd /etc/systemd/system/
then
sudo nano gunicorn.service
then verify that at the end of the bind line the application name has been changed to the new application name
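After editing the unit file, systemd needs to reload it before the change takes effect; assuming the service is named gunicorn.service, a typical sequence is:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo systemctl status gunicorn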