I use Celery with RabbitMQ in my Django app (on Elastic Beanstalk) to manage background tasks, and I daemonized it using Supervisor. The problem is that one of the periodic tasks I defined is now failing (after a week in which it worked properly). The error I get is:

[01/Apr/2014 23:04:03] [ERROR] [celery.worker.job:272] Task clean-dead-sessions[1bfb5a0a-7914-4623-8b5b-35fc68443d2e] raised unexpected: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)

Traceback (most recent call last):

  File "/opt/python/run/venv/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost

    human_status(exitcode)),

WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL).
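For context, signal 9 is SIGKILL, which a process cannot catch or handle, so the killed worker never logs anything itself; the traceback above comes from the parent pool noticing the child vanished. A quick sanity check of the signal number:

```python
import signal

# SIGKILL cannot be trapped: the kernel (e.g. the OOM killer) or another
# process sent it, so look outside the worker's own logs for the cause.
print('SIGKILL =', int(signal.SIGKILL))
```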

All the processes managed by Supervisor are up and running properly (supervisorctl status says RUNNING).

I tried reading several logs on my EC2 instance, but none of them helped me figure out what is causing the SIGKILL. What should I do? How can I investigate?

These are my celery settings:

CELERY_TIMEZONE = 'UTC'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
BROKER_URL = os.environ['RABBITMQ_URL']
CELERY_IGNORE_RESULT = True
CELERY_DISABLE_RATE_LIMITS = False
CELERYD_HIJACK_ROOT_LOGGER = False
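For reference, a periodic task such as clean-dead-sessions is typically registered through CELERYBEAT_SCHEDULE in the same settings module. The dotted task path and the interval below are illustrative only, not taken from the question:

```python
from datetime import timedelta

# Illustrative sketch: the real task module and schedule are not shown
# in the question, so both values here are hypothetical.
CELERYBEAT_SCHEDULE = {
    'clean-dead-sessions': {
        'task': 'com.cygora.tasks.clean_dead_sessions',  # hypothetical path
        'schedule': timedelta(hours=1),                   # hypothetical interval
    },
}
```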

And this is my supervisord.conf:

[program:celery_worker]
environment=$env_variables
directory=/opt/python/current/app
command=/opt/python/run/venv/bin/celery worker -A com.cygora -l info --pidfile=/opt/python/run/celery_worker.pid
startsecs=10
stopwaitsecs=60
stopasgroup=true
killasgroup=true
autostart=true
autorestart=true
stdout_logfile=/opt/python/log/celery_worker.stdout.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=10
stderr_logfile=/opt/python/log/celery_worker.stderr.log
stderr_logfile_maxbytes=5MB
stderr_logfile_backups=10
numprocs=1

[program:celery_beat]
environment=$env_variables
directory=/opt/python/current/app
command=/opt/python/run/venv/bin/celery beat -A com.cygora -l info --pidfile=/opt/python/run/celery_beat.pid --schedule=/opt/python/run/celery_beat_schedule
startsecs=10
stopwaitsecs=300
stopasgroup=true
killasgroup=true
autostart=false
autorestart=true
stdout_logfile=/opt/python/log/celery_beat.stdout.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=10
stderr_logfile=/opt/python/log/celery_beat.stderr.log
stderr_logfile_maxbytes=5MB
stderr_logfile_backups=10
numprocs=1

1 Answer

You might have a memory leak, and the OS's OOM killer is terminating your process for excessive memory consumption. Check the kernel log:

grep oom /var/log/messages

If there are OOM messages, that is your problem.

If there are no messages, run the periodic task manually in a shell and watch what happens:

MyPeriodicTask().run()
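To confirm a leak from inside the task itself, one sketch (assuming Linux, where ru_maxrss is reported in kilobytes) is to log the process's peak resident memory before and after the task body runs; clean_dead_sessions below stands in for the real task:

```python
import resource

def peak_rss_kb():
    """Peak resident set size of this process, in KB (Linux convention)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def clean_dead_sessions():
    before = peak_rss_kb()
    # ... the real task body would run here ...
    after = peak_rss_kb()
    print('peak RSS: %d KB -> %d KB' % (before, after))
    return after - before
```

If the reported peak grows steadily across runs, a common mitigation is to recycle workers with celery worker's --maxtasksperchild option so each child is replaced after a fixed number of tasks.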
