Add Flask context automatically to APScheduler executors #183

Open

mxdev88 wants to merge 1 commit into viniciuschiele:master from mxdev88:with-app-context

Conversation

@mxdev88 (Contributor) commented Aug 14, 2021

This PR automatically adds the Flask application context to added or modified jobs. Most of the time when using Flask-APScheduler, you need to access something from your Flask app, which requires you to push the context yourself.

This should solve issues like the one mentioned in #176.
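
For reference, the manual pattern this aims to remove looks roughly like the sketch below (illustrative only; `send_report` is a made-up job, and it assumes `scheduler.app` holds the Flask application after `init_app` has run):

    # Without automatic context handling, every job that touches the app
    # (current_app, config, extensions) has to push the context itself.
    from flask import current_app

    @scheduler.task("interval", id="send_report", seconds=60)
    def send_report():
        with scheduler.app.app_context():
            current_app.logger.info("report sent")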

@viniciuschiele (Owner)

I guess this code only works when the jobs are kept in memory; if you use a store like sqlite/sqlalchemy, it won't work.
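
A persistent jobstore has to serialize the job's callable, essentially as an importable module:function reference plus pickled state; a context-pushing wrapper created at scheduling time is a local closure with no such reference, so it cannot be stored. A minimal sketch of the pattern that breaks (names are hypothetical):

    def make_context_job(app, func):
        # Wrapping the callable at scheduling time produces a local closure...
        def wrapper(*args, **kwargs):
            with app.app_context():
                return func(*args, **kwargs)
        return wrapper

    # ...which runs fine with the in-memory jobstore, but a persistent store
    # (sqlite/sqlalchemy) cannot serialize it, because the wrapper is not
    # importable by a module:function reference.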

@mxdev88 (Contributor, Author) commented Aug 14, 2021

Another option may be to act on the part that runs the job rather than on the job itself. Any thoughts on this?

@viniciuschiele (Owner)

It is APScheduler that runs the jobs and it doesn't allow me to add some sort of "middleware" to initialize a Flask context before calling the actual method. If you find a workaround for that, I will be happy to merge it.

@mxdev88 changed the title from "Add Flask context automatically to added or modified jobs" to "Add Flask context automatically to APScheduler executors" on Aug 22, 2021
@mxdev88 (Contributor, Author) commented Aug 22, 2021

Hi @viniciuschiele - I made another attempt, this time by decorating the executors rather than the jobs. Let me know what you think. Thanks!
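
For illustration only, one way an executor-level hook could look; this is not necessarily what the diff does, and it assumes APScheduler 3.x internals (`BasePoolExecutor._do_submit_job`, `run_job`, and the `_run_job_success`/`_run_job_error` callbacks), so treat the exact hooks as assumptions:

    from apscheduler.executors.base import run_job
    from apscheduler.executors.pool import ThreadPoolExecutor

    class AppContextThreadPoolExecutor(ThreadPoolExecutor):
        """Thread pool executor that pushes the Flask app context around every job run."""

        def __init__(self, app, max_workers=10):
            super().__init__(max_workers)
            self.app = app

        def _do_submit_job(self, job, run_times):
            def run_with_context():
                # The worker thread enters the app context before the job runs,
                # so jobs loaded from any jobstore can use current_app, config, etc.
                with self.app.app_context():
                    return run_job(job, job._jobstore_alias, run_times, self._logger.name)

            future = self._pool.submit(run_with_context)
            # Simplified version of the stock success/error callback.
            future.add_done_callback(
                lambda f: self._run_job_error(job.id, f.exception())
                if f.exception()
                else self._run_job_success(job.id, f.result())
            )

Such an executor could then presumably be registered via SCHEDULER_EXECUTORS = {"default": AppContextThreadPoolExecutor(app)}, so that every job, whether kept in memory or reloaded from a persistent store, runs inside the application context.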

@savchenko

This does seem to work as intended. Tested as follows:

SQLAlchemy jobstore

    from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

    SCHEDULER_JOBSTORES = {
        "default": SQLAlchemyJobStore(
            url=DatabaseConfig.SQLALCHEMY_DATABASE_URI
        )
    }

Decorated function

@scheduler.task("foo", id="foo", seconds=30)
def foo():
    do_something()

After starting Flask, the DB is populated with the corresponding data:

SELECT * FROM apscheduler_jobs
-- "id"	        "next_run_time" 	"job_state"
-- "foo"	1721815605.395674	"binary data"

do_something() is executed every 30 seconds, with the corresponding log records created.

@viniciuschiele do you reckon it can be merged prior to the upcoming APScheduler v4.0 release?
