Job has been attempted too many times or run too long

#1

I have a job that works flawlessly locally, but in production it runs into issues and doesn't work. I've wrapped the entire `handle()` in a `try/catch` and am not seeing anything logged to Bugsnag, even though plenty of other exceptions from the deployed code do show up there.


```php
public function handle()
{
    try {
        // do stuff
    } catch (\Exception $e) {
        Bugsnag::notifyException($e);

        throw $e;
    }
}
```

According to Laravel Horizon, this queue job runs for `0.0026001930236816406` seconds; I never see it do its work, and I never see any errors related to this job in the `failed_jobs` table.

**config/queue.php**

```php
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => (60 * 10), // 10 minutes
    'block_for' => null,
],
```

**config/horizon.php**

```php
'environments' => [
    'production' => [
        'supervisor' => [
            'connection' => 'redis',
            'queue' => [
                'default',
            ],
            'balance' => 'auto',
            'processes' => 10,
            'tries' => 3,

            // 10 seconds under the queue's retry_after to avoid overlap
            'timeout' => (60 * 10) - 10, // Just under 10 mins
        ],
    ],
],
```

If something is causing this job to retry over and over, how can I find out what it is? I'm at a loss.

**Investigation thus far**

- My expectation is I should be able to run the query:

```sql
SELECT DISTINCT exception, COUNT(id) AS errors
FROM failed_jobs
WHERE payload LIKE '%[TAG-JOB-HAS]%'
GROUP BY exception;
```

To see more than this error message:

> Job has been attempted too many times or run too long

but that's all I see.

- Laravel Horizon's dashboard shows the job in question runs for < 1 second, so I know it's not actually timing out.



Reply

#2
I had the same problem.

I fixed it by increasing the `retry_after` parameter in the **config/queue.php** file.

Make sure the `retry_after` value is greater than the time it takes your longest job to run:

```php
'connections' => [

    'sync' => [
        'driver' => 'sync',
    ],

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 9000,
    ],
```

Reply

#3
Try to catch the exception in the `failed()` method provided by Laravel:

```php
/**
 * The job failed to process.
 *
 * @param Exception $exception
 * @return void
 */
public function failed(Exception $exception)
{
    // Send user notification of failure, etc...
}
```

Also, check whether your default queue driver locally is `sync`; if it is, the difference is expected behavior, since the `sync` driver runs jobs immediately in-process and never retries them. A quick way to check is sketched below.
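A minimal way to confirm which driver each environment actually uses (assuming the standard setup where the connection is chosen by the `QUEUE_CONNECTION` env variable):

```php
// In `php artisan tinker`, inspect the active queue connection:
config('queue.default'); // e.g. "sync" locally, "redis" in production

// The value normally comes from .env:
// QUEUE_CONNECTION=sync    (local)
// QUEUE_CONNECTION=redis   (production)
```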

Reply

#4
According to the Laravel documentation, you can handle job failures in two common ways:

- using failed-job events
- using the `failed()` method

In the first case, you can handle all failed jobs by registering a callback with the `Queue::failing()` method. The callback receives an `Illuminate\Queue\Events\JobFailed` event as a parameter, and it contains the exception (see the sketch below).
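A minimal sketch of that approach, typically registered in a service provider's `boot()` method (the log message and fields are illustrative only):

```php
use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;

// e.g. in AppServiceProvider::boot()
Queue::failing(function (JobFailed $event) {
    // The event exposes the connection, the job, and the exception
    Log::error('Queue job failed', [
        'connection' => $event->connectionName,
        'job'        => $event->job->resolveName(),
        'exception'  => $event->exception->getMessage(),
    ]);
});
```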

In the second case, you can define a `failed()` method on the job class itself, placed next to your `handle()` method. It receives the exception as a parameter too.

Example:

```php
public function failed(\Throwable $exception)
{
    // Log failure
}
```

Hope this helps.


Reply

#5


If you've seen this `MaxAttemptsExceededException` in your error logs or `failed_jobs` table and you don't have a clue what happened to the job, let me try to explain what may have happened. It's either:

- The job timed out and it can't be attempted again.
- The job was released back to the queue and it can't be attempted again.

If your job's processing time exceeded the timeout configuration, the worker will check the maximum attempts allowed and the expiration date for the job and decide whether it can be attempted again. If that's not possible, the worker will just mark the job as failed and throw that `MaxAttemptsExceededException`.

Likewise, if the job was released back to the queue and a worker picks it up, the worker will first check whether the maximum attempts allowed were exceeded or the job has expired, and it throws `MaxAttemptsExceededException` in that case. A per-job way to raise those limits is sketched below.
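A minimal sketch of raising those limits on a single job class; `$tries`, `$timeout`, and `$backoff` are standard queued-job options, while the class name and values here are illustrative only:

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ProcessReport implements ShouldQueue // hypothetical job
{
    use InteractsWithQueue, Queueable;

    public $tries = 5;     // attempts allowed before MaxAttemptsExceededException
    public $timeout = 300; // seconds a single attempt may run
    public $backoff = 30;  // seconds to wait before retrying a released job

    public function handle(): void
    {
        // do stuff
    }
}
```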



Reply

#6
Perhaps this will help someone: don't use `dd()` in queued jobs. It dumps and then exits, killing the worker process mid-job, so the job is never marked as finished and keeps being retried until it hits this error. Use a log call instead, as below.
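A hedged alternative for debugging inside a queued job (the `$payload` variable is hypothetical):

```php
// Instead of dd($payload), which exits and kills the worker process:
\Log::debug('Job payload', ['payload' => $payload]);
```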
Reply

#7
I had the same problem.

I resolved the issue by adding the property below to the job class:

```php
public $failOnTimeout = false;
```

With it, the job is not marked as failed when it times out. I also increased the timeout:

```php
public $timeout = 120000;
```


Reply

#8
This solved my issue: `php artisan queue:work --timeout=600 --tries=30`
Reply

#9
I was finally able to fix it. This is the solution:

👉 **You have to set `retry_after` (in config/queue.php) and `timeout` (in config/horizon.php) together.**

The two values work in tandem: whichever limit is hit first throws the exception "has been attempted too many times or run too long. The job may have previously timed out."

`config/queue.php`:

```php
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 60 * 7, // always greater than timeout
    'after_commit' => true, // check this if you dispatch jobs/events inside DB transactions
    'block_for' => null,
],
/* ... */
```


`config/horizon.php`:

```php
'defaults' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default'],
        /* ... */
        'timeout' => 60 * 5, // always lower than retry_after
    ],
    /* ... */
],
```



Explanation: Think of `retry_after` as a global control: a process that periodically checks whether any job is still reserved after that many seconds and, if so, releases it back onto the queue. `timeout`, by contrast, is precise and is applied the moment the job starts (it is exactly the `--timeout` flag of the Horizon worker command). Therefore, `timeout` should always be smaller than `retry_after`, and `retry_after` should be greater than the time the longest job on that connection takes. (More on job expiration in the official documentation.)


👉 It's not mandatory, but if you don't want to raise these limits globally yet have particularly long jobs, put them on a dedicated queue with a higher timeout.

👉 Also, check that you don't have any infinite loops, for example ones caused by model observers. Sometimes a Model1 observer touches Model2 and fires Model2's observer; that observer touches Model1 again, firing Model1's observer once more, and so on forever. You never get a specific error log for this situation, only "has been attempted too many times...". A sketch of the pattern follows below.
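A minimal sketch of that observer loop and one way to break it, using Eloquent's `saveQuietly()` to persist without firing model events (the `Order`/`Invoice` models and the observer are hypothetical):

```php
class OrderObserver // hypothetical observer on a hypothetical Order model
{
    public function saved(\App\Models\Order $order): void
    {
        // $order->invoice->save() here would fire InvoiceObserver::saved(),
        // which touches the Order again -> the two observers loop forever
        // until the worker hits "attempted too many times".

        // Break the cycle by saving without dispatching model events:
        $order->invoice->saveQuietly();
    }
}
```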


Reply

#10



You can set the number of seconds each job is allowed to run before an exception is thrown:

`php artisan queue:work --timeout=300`

Reply


