If you have connection leaks with c3p0, meaning Connections are being checked out that don't make it back into the pool, you will be in trouble. By default, c3p0 will wait for a connection forever if none is currently available in the pool.
While testing the application, I found that some threads were waiting for connections for a long time, and after monitoring the JMX console in VisualVM I saw that all the connections were taken. Here is the thread dump I took:
"1658227@qtp-25018827-56" - Thread t@321
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Native Method)
- waiting on <103344b> (a com.mchange.v2.resourcepool.BasicResourcePool)
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1315)
at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at org.apache.ibatis.session.defaults.DefaultSqlSessionFactory.openSessionFromDataSource(DefaultSqlSessionFactory.java:72)
at org.apache.ibatis.session.defaults.DefaultSqlSessionFactory.openSession(DefaultSqlSessionFactory.java:32)
I am not going to discuss why the code leaks connections in this post (in our case, we never invoked session.close() in a finally block). I just want to make sure that if you run into this same pattern of problem, you can temporarily change some c3p0 configuration before you fix your bad code...
We are using c3p0 version 0.9.1.2. Here is the source code containing the loop that can wait forever for a connection, FYI:
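The leak pattern itself is easy to reproduce with a toy pool. This is a stdlib-only sketch, not c3p0's API (ToyPool and ToyConnection are made-up names): a fixed number of permits stands in for the pool, and forgetting to close() a "connection" leaks a permit for good, until a checkout eventually times out just like a bounded getConnection() would.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Toy stand-in for a connection pool (NOT c3p0 code): a fixed number of
// permits, where forgetting to close() a "connection" leaks a permit forever.
class ToyPool {
    private final Semaphore permits;

    ToyPool(int size) { permits = new Semaphore(size); }

    // Like getConnection() with a checkoutTimeout: block up to timeoutMs,
    // then give up instead of waiting indefinitely.
    ToyConnection checkout(long timeoutMs) throws InterruptedException {
        if (!permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS))
            throw new IllegalStateException("pool exhausted: checkout timed out");
        return new ToyConnection(permits);
    }

    int available() { return permits.availablePermits(); }
}

class ToyConnection implements AutoCloseable {
    private final Semaphore permits;
    private boolean closed;

    ToyConnection(Semaphore permits) { this.permits = permits; }

    @Override
    public void close() { // the check-in our code forgot to do
        if (!closed) { closed = true; permits.release(); }
    }
}
```

With try-with-resources (or, in our MyBatis case, session.close() in a finally block), the "connection" always goes back; without it, every leaked checkout permanently shrinks the pool.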
1297 while ((avail = unused.size()) == 0)
1298 {
1299 // the if case below can only occur when 1) a user attempts a
1300 // checkout which would provoke an acquire; 2) this
1301 // increments the pending acquires, so we go to the
1302 // wait below without provoking postAcquireMore(); 3)
1303 // the resources are acquired; 4) external management
1304 // of the pool (via for instance unpoolResource()
1305 // depletes the newly acquired resources before we
1306 // regain this' monitor; 5) we fall into wait() with
1307 // no acquires being scheduled, and perhaps a managed.size()
1308 // of zero, leading to deadlock. This could only occur in
1309 // fairly pathological situations where the pool is being
1310 // externally forced to a very low (even zero) size, but
1311 // since I've seen it, I've fixed it.
1312 if (pending_acquires == 0 && managed.size() < max)
1313 _recheckResizePool();
1314
1315 this.wait(timeout);
1316 if (timeout > 0 && System.currentTimeMillis() - start > timeout)
1317 throw new TimeoutException("A client timed out while waiting to acquire a resource from " + this + " -- timeout at awaitAvailable()");
1318 if (force_kill_acquires)
1319 throw new CannotAcquireResourceException("A ResourcePool could not acquire a resource from its primary factory or source.");
1320 ensureNotBroken();
1321 }
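The essence of that loop can be boiled down to a stripped-down sketch (illustrative only, not c3p0 code): wait on the pool's monitor until a resource shows up or the timeout elapses. The key detail is that this.wait(0) waits with no deadline at all, which is exactly why the default timeout of 0 blocks checkouts forever.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeoutException;

// Minimal sketch of the awaitAvailable() pattern above, NOT c3p0's actual
// implementation: block until a resource is checked in or the timeout passes.
class MiniResourcePool<T> {
    private final Deque<T> unused = new ArrayDeque<>();

    synchronized T checkout(long timeout) throws InterruptedException, TimeoutException {
        long start = System.currentTimeMillis();
        while (unused.isEmpty()) {
            this.wait(timeout); // timeout == 0 means wait with no deadline
            if (timeout > 0 && System.currentTimeMillis() - start > timeout)
                throw new TimeoutException("timed out waiting for a resource");
        }
        return unused.pop();
    }

    synchronized void checkin(T resource) {
        unused.push(resource);
        this.notifyAll(); // wake waiting checkouts, as c3p0 does on check-in
    }
}
```

The elapsed-time check after wait() is what a positive checkoutTimeout buys you: the waiter breaks out with an exception instead of parking indefinitely on the monitor, which is the WAITING state we saw in the thread dump.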
The timeout value is 0 by default in c3p0, which means wait indefinitely. From the c3p0 docs:
checkoutTimeout
Default: 0
The number of milliseconds a client calling getConnection() will wait for a Connection to be checked-in or acquired when the pool is exhausted. Zero means wait indefinitely. Setting any positive value will cause the getConnection() call to time-out and break with an SQLException after the specified number of milliseconds.
Also, if your code has connection leaks that you can't track down quickly, you should pay attention to this parameter:
unreturnedConnectionTimeout
Default: 0
Seconds. If set, if an application checks out but then fails to check-in [i.e. close()] a Connection within the specified period of time, the pool will unceremoniously destroy() the Connection. This permits applications with occasional Connection leaks to survive, rather than eventually exhausting the Connection pool. And that's a shame. Zero means no timeout, applications are expected to close() their own Connections. Obviously, if a non-zero value is set, it should be to a value longer than any Connection should reasonably be checked-out. Otherwise, the pool will occasionally kill Connections in active use, which is bad. This is basically a bad idea, but it's a commonly requested feature. Fix your $%!@% applications so they don't leak Connections! Use this temporarily in combination with debugUnreturnedConnectionStackTraces to figure out where Connections are being checked-out that don't make it back into the pool!
So you can try setting these two parameters to the same value as your goalkeeper, say 30 seconds. That will protect you from the most extreme leak cases, but the devs still need to find the root cause, of course :)
checkoutTimeout=30000
unreturnedConnectionTimeout=30
Also, under a heavy-load situation, adjust your maxPoolSize to 30 or more instead of the default of 15.
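Putting it all together, the change might look like this in c3p0.properties. The 30-second values are just a starting point; pick something longer than any legitimate checkout in your application, and note that checkoutTimeout is in milliseconds while unreturnedConnectionTimeout is in seconds. debugUnreturnedConnectionStackTraces is added per the docs quoted above, to log where the leaked Connections were checked out.

```properties
# Fail getConnection() after 30s instead of waiting forever (milliseconds)
c3p0.checkoutTimeout=30000
# Destroy Connections checked out for over 30s without a close() (seconds)
c3p0.unreturnedConnectionTimeout=30
# Log the checkout stack trace of each unreturned Connection (debugging only)
c3p0.debugUnreturnedConnectionStackTraces=true
# Raise the pool ceiling from the default of 15 for heavy load
c3p0.maxPoolSize=30
```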