Add Flask context automatically to APScheduler executors #183
mxdev88 wants to merge 1 commit into viniciuschiele:master
Conversation
I guess this code only works when the jobs are kept in memory; if you use a persistent store like SQLite/SQLAlchemy, it won't work.
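(For reference, the job-level wrapping being discussed would look roughly like the sketch below; `with_app_context`, `app`, and `my_task` are illustrative names, not code from this PR. Because the wrapper is a closure created at runtime, persistent jobstores that serialize a textual reference to the callable generally cannot store it, which is the limitation pointed out here.)

```python
# Illustrative sketch only -- not the code in this PR. Wrapping the job's
# callable pushes the Flask app context around each run, but the resulting
# closure cannot be serialized by persistent jobstores such as SQLAlchemyJobStore.
from functools import wraps

def with_app_context(app, func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        with app.app_context():
            return func(*args, **kwargs)
    return wrapper

# scheduler.add_job(id="my_task", func=with_app_context(app, my_task),
#                   trigger="interval", seconds=30)
```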
Another option may be to act on the part that runs the job rather than on the job itself. Any thoughts on this?
It is
Force-pushed from 27d8070 to 9c52004.
Hi @viniciuschiele - made another attempt, this time by decorating executors rather than the jobs. Let me know what you think. Thanks!
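Roughly, the executor-level idea looks like this (a hypothetical sketch, not necessarily the committed code; it leans on APScheduler 3.x internals such as `_do_submit_job` and `run_job`, and `FlaskContextThreadPoolExecutor` is an invented name):

```python
import sys

from apscheduler.executors.base import run_job
from apscheduler.executors.pool import ThreadPoolExecutor


class FlaskContextThreadPoolExecutor(ThreadPoolExecutor):
    """Hypothetical executor that runs every job inside the Flask app context."""

    def __init__(self, app, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._flask_app = app

    def _do_submit_job(self, job, run_times):
        def run_in_context():
            # The context is pushed in the worker thread, around the actual
            # execution, so nothing extra gets serialized with the job and
            # persistent jobstores keep working.
            with self._flask_app.app_context():
                return run_job(job, job._jobstore_alias, run_times, self._logger.name)

        def callback(future):
            try:
                events = future.result()
            except BaseException:
                self._run_job_error(job.id, *sys.exc_info()[1:])
            else:
                self._run_job_success(job.id, events)

        self._pool.submit(run_in_context).add_done_callback(callback)
```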
Force-pushed from 9c52004 to 5119587.
This does seem to work as intended. Tested as follows:

SQLAlchemy jobstore:

```python
SCHEDULER_JOBSTORES = {
    "default": SQLAlchemyJobStore(
        url=DatabaseConfig.SQLALCHEMY_DATABASE_URI
    )
}
```

Decorated function:

```python
@scheduler.task("interval", id="foo", seconds=30)
def foo():
    do_something()
```

After starting Flask, the DB is populated with the corresponding data:

```sql
SELECT * FROM apscheduler_jobs
-- "id"   "next_run_time"     "job_state"
-- "foo"  1721815605.395674   "binary data"
```

@viniciuschiele do you reckon it can be merged prior to the upcoming APScheduler v4.0 release?
This PR adds the Flask context automatically to added or modified jobs. Most of the time when using Flask-APScheduler, you need to access something from your Flask app, which requires you to add the context yourself.
This should solve issues like the one mentioned in #176.
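For comparison, this is roughly what a task has to do today without the change (assuming a standard Flask-APScheduler setup with the `scheduler.app` attribute; the job name and interval below are illustrative):

```python
from flask import Flask, current_app
from flask_apscheduler import APScheduler

app = Flask(__name__)
scheduler = APScheduler()
scheduler.init_app(app)


@scheduler.task("interval", id="report", seconds=30)
def report():
    # Without this PR the context has to be pushed by hand; current_app
    # (and anything else bound to the app, such as a db session) fails otherwise.
    with scheduler.app.app_context():
        current_app.logger.info("report job ran inside the app context")


scheduler.start()
```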