Discussion:
[tornado] Tornado Coroutines memory usage
Web Architect
2018-09-19 13:05:34 UTC
Permalink
Hi,

We have an ecommerce website built on Django. For background processing
of a large amount of data, I am using Tornado coroutines with a producer/
consumer model for concurrency. The producer reads records from a MySQL
database and puts them on a queue; the consumer picks records off the queue
and processes them. This runs as a separate Python task (a Celery task).

I have observed that the memory usage of this task increases drastically
and that the memory is not released. Is there a chance that the Tornado
producer/consumer coroutines are consuming memory and not releasing it? I
would really appreciate any help with this, as it would narrow down the
cause of the high memory usage we are facing.

Thanks.
--
You received this message because you are subscribed to the Google Groups "Tornado Web Server" group.
To unsubscribe from this group and stop receiving emails from it, send an email to python-tornado+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Web Architect
2018-09-19 13:34:20 UTC
Permalink
Tornado version is 4.5.2 and Python version is 2.7.10.

Post by Web Architect
Hi,
We have an ecommerce website built on Django. For background processing
of a large amount of data, I am using Tornado coroutines with a producer/
consumer model for concurrency. The producer reads records from a MySQL
database and puts them on a queue; the consumer picks records off the queue
and processes them. This runs as a separate Python task (a Celery task).
I have observed that the memory usage of this task increases drastically
and that the memory is not released. Is there a chance that the Tornado
producer/consumer coroutines are consuming memory and not releasing it? I
would really appreciate any help with this, as it would narrow down the
cause of the high memory usage we are facing.
Thanks.
Ben Darnell
2018-09-20 00:42:18 UTC
Permalink
Post by Web Architect
I have observed that the memory usage of this task increases drastically
and that the memory is not released. Is there a chance that the Tornado
producer/consumer coroutines are consuming memory and not releasing it? I
would really appreciate any help with this, as it would narrow down the
cause of the high memory usage we are facing.
There aren't any known issues that would cause this. Sometimes coroutines
that raise exceptions can create reference cycles that take some time to
be garbage collected, but even then they will be freed eventually. You'll
need to use a Python memory profiler to see where the space is going (I use
objgraph for this, although it's a little complicated; there are other
options too).
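The kind of reference cycle mentioned above can be reproduced and reclaimed with the stdlib gc module (a minimal sketch, not the objgraph workflow; the Node class is hypothetical):

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

gc.collect()  # start from a clean slate

# Build a reference cycle: a -> b -> a
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b      # reference counting alone cannot free this cycle

# The cyclic collector finds and reclaims the unreachable pair;
# collect() returns the number of unreachable objects it found.
freed = gc.collect()
print(freed)
```

Until that collection runs, the cycle (and anything it references, such as a coroutine's local variables) stays in memory, which is why cycles can look like a leak in a long-running process.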

-Ben
Web Architect
2018-09-20 05:33:32 UTC
Permalink
Hi Ben,

Thanks for the response. I will certainly use Python profilers to debug this.

Is there a chance that the Tornado Queue (the queue between the producer
and the consumer) is not getting cleaned up? The Celery task is an
always-running task, so the memory grabbed to maintain the queue might
never be cleaned up. As mentioned, the producer passes a large number of
objects (Django ORM objects) to the consumer. I always have to restart the
task to free the memory.
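One thing worth measuring here, sketched below under assumptions: if the queue holds full Django ORM objects, every queued entry pins that object's loaded fields in memory until it is consumed, so a common mitigation is to enqueue only primary keys and re-fetch in the consumer. HeavyRecord and fetch_record are hypothetical stand-ins, not Django APIs:

```python
import sys

class HeavyRecord:
    # Stand-in for a Django model instance with loaded fields.
    def __init__(self, pk):
        self.pk = pk
        self.payload = "x" * 10_000  # simulated loaded field data

def fetch_record(pk):
    # Stand-in for Model.objects.get(pk=pk), called in the consumer
    # so the heavy object lives only while it is being processed.
    return HeavyRecord(pk)

# Enqueueing whole objects keeps every payload alive at once...
heavy_queue = [HeavyRecord(pk) for pk in range(100)]
# ...while enqueueing keys keeps only small integers queued.
light_queue = list(range(100))

heavy_bytes = sum(sys.getsizeof(r.payload) for r in heavy_queue)
light_bytes = sum(sys.getsizeof(pk) for pk in light_queue)
print(heavy_bytes > light_bytes)  # True
```

Re-fetching costs an extra query per record, but it means the queue's memory footprint no longer scales with the size of the ORM objects.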

Thanks.
Post by Ben Darnell
Post by Web Architect
I have observed that the memory usage of this task increases drastically
and that the memory is not released. Is there a chance that the Tornado
producer/consumer coroutines are consuming memory and not releasing it? I
would really appreciate any help with this, as it would narrow down the
cause of the high memory usage we are facing.
There aren't any known issues that would cause this. Sometimes coroutines
that raise exceptions can create reference cycles that take some time to
be garbage collected, but even then they will be freed eventually. You'll
need to use a Python memory profiler to see where the space is going (I use
objgraph for this, although it's a little complicated; there are other
options too).
-Ben