Davy Durham
2013-05-29 21:36:11 UTC
Hi,
I'm somewhat versed in Tornado and have read through most of the relevant
documentation, but I'm not sure how to solve this particular problem.
I've created a "proxy" application which accepts websocket upgrade
requests. In my open() method, I create an IOStream to another server and
"ferry" the data to and fro. Essentially I'm wrapping another network
protocol inside a websocket, and I need to do this for many simultaneous
connections, en masse, asynchronously. It's working properly right now, but
I have a concern:
In my on_message() method, I call self.my_stream.write(data) to send data
received from a websocket to the other server. (I also go the other way,
but I'll discuss that later if necessary.) Obviously, if the network
capacity between the web browser and my webserver is large but the capacity
between my webserver and the other server is small, then data may continue
to arrive at a fast rate, and every call to self.my_stream.write(data) will
be forced to buffer that data. My concern is that a resource exhaustion
attack could be leveraged against my webserver.
I see in the WebSocketHandler code that the socket isn't added back to the
ioloop for READING until on_message() returns. But of course, if I don't
return from on_message() until my_stream's write buffer is back below some
threshold, then the ioloop won't be able to process other websockets' data.
I played with gen a bit, but wasn't exactly sure how it is supposed to
work; I had no success.
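To make the threshold idea concrete, here's a minimal sketch of the high/low watermark bookkeeping I have in mind. This is hypothetical code of my own, not Tornado API: FlowController, on_flushed, and the watermark names are all invented for illustration.

```python
class FlowController:
    """Hypothetical high/low watermark flow control for one proxied connection.

    'pending' counts bytes accepted from the fast side but not yet flushed
    to the slow side. Reading pauses above 'high' and resumes below 'low'.
    """

    def __init__(self, high=64 * 1024, low=16 * 1024):
        self.high = high
        self.low = low
        self.pending = 0     # bytes queued for the slow connection
        self.reading = True  # whether the fast side may deliver more data

    def on_message(self, nbytes):
        """Called when the fast side hands us nbytes to forward."""
        self.pending += nbytes
        if self.pending >= self.high:
            self.reading = False  # stop scheduling the socket for READ

    def on_flushed(self, nbytes):
        """Called as the slow side actually writes nbytes to the network."""
        self.pending -= nbytes
        if self.pending <= self.low:
            self.reading = True  # safe to accept more input again
```

The two thresholds are deliberately far apart so the connection doesn't flap between paused and resumed on every packet.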
Here's a basic outline of the program in case someone can suggest how to
deal with this issue:
class WebSocketPassThru(tornado.websocket.WebSocketHandler):
    def open(self):
        # connect to remote server ...
        self.my_stream = tornado.iostream.IOStream(...)
        ...
    def on_message(self, message):
        ...
        self.my_stream.write(message)
        ...
    def on_close(self):
        ...
        self.my_stream.close()
So, basically, on_message() is going to be called repeatedly and
my_stream.write() is going to keep buffering data, which is a memory
problem if the data cannot be dumped to the network as fast as it's
arriving.
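For what it's worth, the behavior I'm after is what stdlib asyncio streams express directly with drain() (this is an analogy on my part, not Tornado code): the copy loop suspends whenever the transport's write buffer is over its watermark, so the fast side is only read as fast as the slow side can absorb.

```python
import asyncio

async def ferry(reader, writer):
    """Copy bytes from reader to writer, respecting backpressure.

    writer.write() only buffers; awaiting writer.drain() suspends this
    coroutine while the transport's write buffer is over its high
    watermark, which in turn stops us from reading the fast side.
    """
    while True:
        chunk = await reader.read(4096)
        if not chunk:  # EOF on the source
            break
        writer.write(chunk)   # buffers into the transport
        await writer.drain()  # backpressure: wait until the buffer drains
    writer.close()
    await writer.wait_closed()
```

Because only the one coroutine suspends in drain(), other connections keep being serviced, which is exactly what blocking inside on_message() would prevent.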
Thanks
--
You received this message because you are subscribed to the Google Groups "Tornado Web Server" group.
To unsubscribe from this group and stop receiving emails from it, send an email to python-tornado+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/groups/opt_out.