
MongoDB reconnections

Hey there,

So while troubleshooting why our mongo logs are so huge, I noticed they’re mostly filled with:

Wed Jul  9 16:57:02.237 [conn42275875] end connection 172.18.1.210:57437 (2304 connections now open)
Wed Jul  9 16:57:02.244 [initandlisten] connection accepted from 172.18.3.142:61795 #42275876 (2305 connections now open)
Wed Jul  9 16:57:02.245 [conn42275876] end connection 172.18.3.142:61795 (2304 connections now open)
Wed Jul  9 16:57:02.245 [conn42275785] end connection 172.18.0.171:56677 (2303 connections now open)
Wed Jul  9 16:57:02.249 [initandlisten] connection accepted from 172.18.12.98:53749 #42275877 (2304 connections now open)
Wed Jul  9 16:57:02.250 [conn42275877] end connection 172.18.12.98:53749 (2303 connections now open)
Wed Jul  9 16:57:02.250 [initandlisten] connection accepted from 172.18.3.181:62839 #42275878 (2304 connections now open)
Wed Jul  9 16:57:02.251 [initandlisten] connection accepted from 172.18.12.138:60582 #42275879 (2305 connections now open)
Wed Jul  9 16:57:02.253 [conn42275879] end connection 172.18.12.138:60582 (2304 connections now open)

The docs tell me ( docs.mongodb.org/manual/faq/deve … ted-events ):

“If you see a very large number of connection and re-connection messages in your MongoDB log, then clients are frequently connecting and disconnecting to the MongoDB server. This is normal behavior for applications that do not use request pooling, such as CGI. Consider using FastCGI, an Apache Module, or some other kind of persistent application server to decrease the connection overhead.”

It seems like the application (Deadline in this case) would need to support request pooling. Were you guys considering that?
I’m not sure how big this constant reconnection overhead is, but the logs are sometimes getting so large it’s impacting the full drive space (it’s a high-performance, but relatively small, RAID).
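
For anyone following along, the pattern the docs are describing looks roughly like this in Python with pymongo — just a sketch, since Deadline’s actual client is .NET, and the host/database/collection names here are made up:

# Sketch of the pooling pattern from the MongoDB docs -- not Deadline's
# actual code (that's .NET); host/db/collection names are placeholders.
from pymongo import MongoClient

# Anti-pattern: a fresh client (and fresh TCP connection) per operation.
# Each call produces a connect/disconnect pair in mongod's log, like above.
def fetch_job_no_pooling(job_id):
    client = MongoClient("mongodb://mongo-server:27017")
    try:
        return client.deadlinedb.Jobs.find_one({"_id": job_id})
    finally:
        client.close()  # tears the socket down again

# Pooled pattern: one long-lived client reused for every operation.
# MongoClient keeps an internal connection pool, so repeated queries
# reuse existing sockets instead of reconnecting each time.
_client = MongoClient("mongodb://mongo-server:27017", maxPoolSize=10)

def fetch_job_pooled(job_id):
    return _client.deadlinedb.Jobs.find_one({"_id": job_id})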

They suggest ‘quieting’ the log, but I’m wary of which other messages that would filter out…
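
(For context, the option they mean is mongod’s quiet mode — in a 2.x-era config file it’s roughly the line below. Per the mongod docs it suppresses the connection accepted/closed events, but also output from database commands and replication activity, hence the hesitation:)

# mongod.conf (2.x-style) -- silences connection accepted/closed events,
# but also command output and replication activity, not just these lines
quiet = true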

Hmm, we were definitely using connection pools before… That seems to have changed somehow, because I did a quick test with a single Slave and can see from the logs that it’s re-establishing a new connection every time.

I’ll have to do some investigating as to when exactly that changed, I imagine this was probably an unintended regression…

Cool, thanks for looking into this, Jon

Did you guys figure anything out about this? We still get a flood of these in the log.

Yeah, this seemed to be some internal issue with the MongoDB driver; 7.0 isn’t exhibiting this same behaviour.

Unfortunately, the newer driver that fixed this issue isn’t backwards-compatible with the one we’re shipping with 6.2, so this probably won’t make it in the 6.2.1 patch.

If it’s causing problems for you guys, we can look at the code for the driver, and maybe do a custom build of it with a fix… But none of us have really looked at that code before, or made a build of it, so I’m not sure how much work would be involved exactly.

I wonder if this affects performance of the clients + servers, since the connections keep having to be reinitialized.
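
If someone wants to put a number on it, a quick-and-dirty measurement along these lines would do it (Python/pymongo sketch, not Deadline code; the server address is a placeholder):

# Rough benchmark sketch: compares the cost of reconnecting on every
# call against a reused pooled client.
import time
from pymongo import MongoClient

URI = "mongodb://mongo-server:27017"  # placeholder address
N = 100

# Reconnect on every call (what the buggy driver is effectively doing).
start = time.time()
for _ in range(N):
    client = MongoClient(URI)
    client.admin.command("ping")  # forces the connection handshake
    client.close()
print("reconnect each call: %.1f ms/op" % ((time.time() - start) / N * 1000))

# One pooled client, reused for every call.
client = MongoClient(URI)
start = time.time()
for _ in range(N):
    client.admin.command("ping")
print("pooled client:       %.1f ms/op" % ((time.time() - start) / N * 1000))
client.close()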

I’ll see what I can do, I already pulled down the code for a separate issue; I’ll dig around.

Okay, I managed to find the offending code, as well as the fix the 10gen guys put in for it in the later driver iterations.

I was able to backport it to the version we use without too much work; I’ve attached a version with the fix in it. It should be a drop-in replacement on the client machines; let me know how it goes!

Cheers,
Jon

EDIT: Note that this didn’t make it into the build Ryan made today. If you haven’t deployed it yet, it’d probably be easiest to drop the DLL into the bin.zip files in the repo, and let the auto-update take care of the deployment.
MongoDB.Driver.zip (132 KB)
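
If anyone wants to script the injection rather than editing the zips by hand, something along these lines should work — a Python sketch where the repository path and zip locations are assumptions, so adjust to your layout:

# Hypothetical helper to swap the patched MongoDB.Driver.dll into each
# bin.zip in the repository. Paths are assumptions -- adjust as needed.
# Rewrites each archive so the old DLL entry is replaced, then swaps
# the rewritten file into place.
import glob
import os
import shutil
import zipfile

PATCHED_DLL = "MongoDB.Driver.dll"      # extracted from Jon's attachment
REPO_ROOT = "/mnt/deadline/repository"  # placeholder repository path

for zip_path in glob.glob(os.path.join(REPO_ROOT, "bin", "*", "bin.zip")):
    tmp_path = zip_path + ".tmp"
    with zipfile.ZipFile(zip_path, "r") as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if os.path.basename(item.filename) == PATCHED_DLL:
                continue  # skip the old driver DLL
            dst.writestr(item, src.read(item.filename))
        dst.write(PATCHED_DLL, PATCHED_DLL)  # add the patched DLL
    shutil.move(tmp_path, zip_path)
    print("patched", zip_path)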

Thanks Jon, I’ll inject it into the zip! Much appreciated
