python - Re-running SQLAlchemy queries when connections get stale -


I have SQLAlchemy connecting to Postgres via pgpool. pgpool is configured to recycle connections that are 60 seconds old.

I have 2 problems:

1) Sometimes a huge query takes more than 60s (I know that's bad... we're working on improving this), and subsequent queries fail because they rely on the same old connection, which is no longer valid.

2) Similarly, when I start the Pyramid app using IPython, connections go stale here too whenever I stop to think for a moment.

When I attempt to perform a query with a session whose connection is stale, I get an exception saying:

OperationalError: (psycopg2.OperationalError) connection terminated due to client idle limit reached
ERROR:  connection terminated due to client idle limit reached

SQLAlchemy's pessimistic disconnect handling docs recommend testing the connection when it is checked out of the pool. However, since my connections become stale after being checked out, that wouldn't help much.
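(For reference, a minimal sketch of that pessimistic approach, assuming a newer SQLAlchemy (1.2+) where it is exposed as the pool_pre_ping flag, and with a placeholder connection URL; as noted, it only tests the connection at checkout time:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session

# Placeholder URL; pool_pre_ping issues a lightweight test query each
# time a connection is checked out of the pool, replacing it if dead.
engine = create_engine(
    "postgresql+psycopg2://user:pass@pgpool-host:5432/mydb",
    pool_pre_ping=True,
)
MySession = scoped_session(sessionmaker(bind=engine))

This does nothing for a connection that pgpool kills while it is still checked out.)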

I think the right solution is to refresh the session's connection upon getting this type of error:

session = MySession()  # using scoped_session here
query = session.query(...)
try:
    rows = [r for r in query]
except OperationalError:
    # somehow tell query.session to use a new connection here and try again?

How can I do this?

For me, executing

session.close_all() 

makes the session able to run queries again, at least until it idles out again.
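(For now I'm wrapping queries in a small retry helper, roughly like the sketch below; the run_stale_safe name and the single retry are just my own convention, and it simply leans on the close_all() behaviour described above:

from sqlalchemy.exc import OperationalError

def run_stale_safe(session, build_query):
    # Run a query; if the connection has gone stale, discard all
    # sessions/connections via close_all() and retry once.
    try:
        return list(build_query(session))
    except OperationalError:
        session.close_all()  # the trick that works for me
        return list(build_query(session))

# usage:
# rows = run_stale_safe(MySession(), lambda s: s.query(SomeModel))
)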

Interestingly, running session.remove() or session.close(), which the SQLAlchemy documentation seems to imply should work, doesn't work; it makes future queries raise InvalidRequestError: Can't reconnect until invalid transaction is rolled back (which, of course, session.rollback() doesn't fix) until I call session.close_all().

I hope someone can provide insight into why session.close_all() does the trick. It may not be an appropriate solution for production, but it should at least mean I don't have to restart the whole app in my IPython session.

