Thread and memory leak "derby.rawStoreDaemon" with Report "Use As View"
Reports that are based on views created by saving a report with "Use As View" leave behind threads named "derby.rawStoreDaemon" when the report has an error (e.g. division by zero).
Steps to reproduce:
1. Run this in your database:
create table test_report_as_view_memory_leak ( x int, y int );
insert into test_report_as_view_memory_leak values (10,1), (20,2), (30,0);
2. Create a simple view that reads this new table
3. Create a report that reads this view, save with "Use As View" enabled
4. Create a report based on view from step 3
5. Add a calculated field with the formula "x / y" and add it to the report
6. Open the "Output" screen of the report
7. Observe a new hanging "derby.rawStoreDaemon" thread each time Output is clicked (i.e. each time the report refreshes); see the sketch after these steps for one way to observe the thread
8. Observe heap memory usage climbing, with GC unable to reclaim it
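For context, the thread name points at Apache Derby: Derby starts a background daemon named "derby.rawStoreDaemon" for every embedded database it boots, and that thread only goes away when the database is explicitly shut down. The sketch below is a minimal, standalone illustration of that Derby-side behaviour; it does not touch Yellowfin at all, assumes only the Derby embedded driver jar(s) on the classpath, and whether Yellowfin's "Use As View" handling actually boots an embedded Derby database this way is an assumption based purely on the thread name.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DerbyDaemonSketch {

    // Count live threads whose name contains "derby.rawStoreDaemon".
    static long countDaemons() {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().contains("derby.rawStoreDaemon"))
                .count();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("daemons before boot:        " + countDaemons());

        // Booting an in-memory Derby database starts its raw store daemon thread.
        try (Connection c = DriverManager.getConnection("jdbc:derby:memory:leakdemo;create=true");
             Statement st = c.createStatement()) {
            try {
                // Same failure mode as the report: integer division by zero.
                st.executeQuery("SELECT 10 / 0 FROM SYSIBM.SYSDUMMY1");
            } catch (SQLException expected) {
                System.out.println("query failed as expected: " + expected.getMessage());
            }
        }

        // Closing the connection is not enough; the database was never shut down,
        // so its daemon thread is still alive.
        System.out.println("daemons after failed query: " + countDaemons());

        // Dropping (shutting down) the in-memory database releases the daemon.
        // Derby signals a successful drop with an SQLException (SQLState 08006).
        try {
            DriverManager.getConnection("jdbc:derby:memory:leakdemo;drop=true");
        } catch (SQLException expected) {
        }
        Thread.sleep(500); // give the daemon a moment to exit
        System.out.println("daemons after shutdown:     " + countDaemons());
    }
}

If the counts printed after the failed query and after the shutdown differ, that matches what we see in Yellowfin: the thread only disappears once the underlying Derby database is shut down, not when individual connections are closed.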
Hello,
We have just tried to replicate this with our systems here at Yellowfin.
We cannot get the system to error using HSQLDB or MySQL.
Can you send us the Yellowfin log directory? This error is likely a symptom rather than the root cause.
Best regards,
Pete
I can confirm that the bug does not occur when using MySQL as the underlying DB. The problem can be reproduced when connecting to Amazon Redshift using org.postgresql.Driver. I will try using com.amazon.redshift.jdbc41.Driver for Redshift instead.
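For anyone else making the same switch, the difference outside the Yellowfin Data Source screen boils down to the driver class and the URL scheme. The snippet below is only an illustration; the endpoint, database and credentials are placeholders, and it assumes the Amazon Redshift JDBC 4.1 driver jar is available on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RedshiftDriverSwitchSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "example_user");         // placeholder
        props.setProperty("password", "example_password"); // placeholder

        // Generic PostgreSQL driver (the configuration that showed the problem):
        //   driver class: org.postgresql.Driver
        //   URL:          jdbc:postgresql://<cluster-endpoint>:5439/<database>

        // Amazon's own driver (the configuration that avoided it):
        Class.forName("com.amazon.redshift.jdbc41.Driver"); // requires the Redshift JDBC 4.1 jar
        try (Connection c = DriverManager.getConnection(
                "jdbc:redshift://<cluster-endpoint>:5439/<database>", props)) {
            System.out.println("connected via " + c.getMetaData().getDriverName());
        }
    }
}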
I can confirm the issue cannot be reproduced when using the com.amazon.redshift.jdbc41.Driver class for Redshift instead of the PostgreSQL one. After some bumps, I was able to switch all our Data Sources to this class, so this solves the problem for me.
Hello,
Glad to hear that the driver switch has resolved this issue.
Best regards,
Pete
Unfortunately I have to report that over time these threads still accumulate and pollute memory. Currently I don't have a repeatable way to reproduce it, but after using YF for about 10 days the thread dump shows 124 threads named derby.rawStoreDaemon.
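To track this without taking full thread dumps by hand, a small JMX client can count the offending threads in the running Yellowfin JVM. This is only a sketch under the assumption that remote JMX is enabled on that JVM; the host, port and security settings below are placeholders, not Yellowfin defaults.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DerbyThreadCountMonitor {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; assumes the Yellowfin JVM was started with
        // remote JMX enabled (e.g. -Dcom.sun.management.jmxremote.port=9010).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbs, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);

            long count = 0;
            for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                if (info != null && info.getThreadName().contains("derby.rawStoreDaemon")) {
                    count++;
                }
            }
            System.out.println("derby.rawStoreDaemon threads: " + count);
        } finally {
            connector.close();
        }
    }
}

Running this on a schedule and logging the count makes the growth over those 10 days easy to show alongside the Yellowfin logs.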