[Vtigercrm-developers] Time Interval workflow exhausts memory
Rubén A. Estrada Orozco
rulotec1 at gmail.com
Fri Feb 12 16:49:31 GMT 2021
I've opened the issue, Uma.
However, now that I think about it, I'm not sure whether a memory exhaustion
error can even be notified. I think that kind of error would cause an immediate
termination of the script.
On the other hand, I don't think the db problem is just a matter of
inefficiency in the sense that more memory is used than necessary per
query. I think there is a memory leak issue. Please refer to this discussion,
where a batch update using the vtws library causes memory to gradually increase
up to exhaustion.
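A common shape for that kind of gradual growth is per-call state that is never released. The sketch below is a minimal Python illustration of the pattern, not the actual vtws internals; the class and cache names are assumptions for demonstration only:

```python
class RecordService:
    """Toy service whose per-call cache mimics a gradual memory leak."""

    def __init__(self):
        # Grows on every call and is never cleared -- the leak.
        self._cache = {}

    def update(self, record_id, fields):
        # Simulate an update API that memoizes the full payload per record.
        self._cache[record_id] = dict(fields)
        return record_id


service = RecordService()
for i in range(10_000):
    service.update(i, {"lastname": "test again lead", "starred": 0})

# The cache retains one entry per processed record, so memory grows
# linearly with the number of updates instead of staying flat.
print(len(service._cache))  # → 10000
```

In a long-running batch job this pattern exhausts memory no matter how high the limit is raised, which matches the observed behavior.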
On Fri, Feb 5, 2021 at 6:55 AM Uma S <uma.s at vtiger.com> wrote:
> Hi All,
> In my opinion on this scenario, I would agree with Mohan's point with
> PearDB data fetch.
> >>I think the issue is with PearDB and Create/Update API as (if I recall
> correctly) at each new call of the api it would add 8+ mb..
> Yes, this could be related to the result set fetched through PearDatabase,
> where each value retrieved ends up with two keys in the result:
> one an integer index and the other the column name. This can consume
> more RAM when more data is retrieved in a single request.
> [fields] => Array
>     (
>         [0] =>
>         [firstname] =>
>         [1] => test again lead
>         [lastname] => test again lead
>         [2] =>
>         [title] =>
>         [3] => 0
>         [accountid] => 0
>         [4] =>
>         [email] =>
>         [5] =>
>         [phone] =>
>         [6] => 1
>         [smownerid] => 1
>         [7] => 6
>         [contactid] => 6
>         [8] => 0
>         [starred] => 0
>     )
> The solution would be to find a way to fetch data in a single format,
> either by integer index or by column name.
> We will be reviewing this further to come up with a solution that adds
> value to performance.
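The overhead Uma describes can be modeled with a short sketch. PearDatabase itself is PHP; the Python below merely contrasts the two indexing schemes (storing each value under both an integer and a column-name key versus one key per value), with made-up column data:

```python
def fetch_both(rows, columns):
    """Simulate a FETCH_BOTH-style result: every value is stored under
    both its integer index and its column name."""
    out = []
    for row in rows:
        d = {}
        for i, (col, val) in enumerate(zip(columns, row)):
            d[i] = val    # integer key
            d[col] = val  # column-name key
        out.append(d)
    return out


def fetch_assoc(rows, columns):
    """Associative-only fetch: one key per value."""
    return [dict(zip(columns, row)) for row in rows]


columns = ["firstname", "lastname", "contactid"]
rows = [("", "test again lead", 6)] * 1000

both = fetch_both(rows, columns)
assoc = fetch_assoc(rows, columns)

# Each FETCH_BOTH row carries twice as many dictionary entries as the
# associative row, so the per-row overhead doubles for every column.
print(len(both[0]), len(assoc[0]))  # → 6 3
```

Multiplied across thousands of rows in a single request, that doubling is the kind of avoidable overhead a single-format fetch would eliminate.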
> >>Also, it would be a good idea to send an email to the admin user upon
> vtigercron.php execution failure.
> Ruben, we will look into notifying the user when a cron job fails with a
> memory-exhausted error. Please do file an issue on the same.
> On Fri, Feb 5, 2021 at 2:38 PM Alan Lord <alanslists at gmail.com> wrote:
>> On 05/02/2021 08:50, Sukhdev Mohan wrote:
>> > Increasing memory doesn't solve it at all; I already tried it. At least
>> > in my case it didn't help, and I had to divide and conquer (hopefully
>> > that only has to be done once in a while). I think the issue is with
>> > PearDB and the Create/Update API, as (if I recall correctly) each new
>> > call of the API would add 8+ MB. I can test this weekend and share more
>> > info. If you have any insight or experience, please do tell.
>> Because I haven't had this particular problem before, I have no
>> experience of it to share.
>> There have been occasions, however, where I have found I needed to
>> increase RAM or switch to the CLI SAPI (no RAM limit) when doing mass
>> record imports or whatnot - but I am talking hundreds of thousands or
>> millions of records. And generally this was more down to
>> capacity/performance limitations of the h/w. Writing a shell script and
>> iterating over the task in smaller blocks has also been a solution
>> for me. But in Rubén's case this is not possible, as it is a scheduled
>> workflow.
> Best Regards
> Vtiger Team
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the vtigercrm-developers mailing list