Using Yellowfin DB replication - lots of stale entries in the ClusterPing table
Resolved
We have Yellowfin clustered using database replication.
However, we're noticing a growing number of stale entries in the ClusterPing table and are worried this will impact database performance.
Example of a related log entry:
SEVERE [TransferQueueBundler,ClusterGroup-vZTg99ZqyIl/wt3hPw=,yellowfin-0-61500] org.jgroups.protocols.BaseBundler.sendMessageList JGRP000036: yellowfin-0-61500: exception sending bundled msgs: java.net.SocketTimeoutException: connect timed out
Why do we have these entries and how can we remove them?
These stale entries should not occur under normal circumstances; they are only left behind when the application is not closed correctly (e.g. a forced close rather than a clean shutdown), so this shouldn't be a common occurrence. We understand it can happen, but it should never happen so often that it impacts DB performance.
However, in a case where this is occurring quite frequently (e.g. a testing/dev environment being 'played' with, which is probably what you're seeing), you can solve this in two ways:
Option 1: Force the table to clear on startup
1. Extract the 'RepositoryPing.xml' from i4-core.jar
2. Find the line: <com.hof.cluster.RepositoryPing/>
3. Change it to: <com.hof.cluster.RepositoryPing clear_table_on_view_change="true" /> (see the sketch after these steps)
4. Put this file in ROOT\WEB-INF\classes\com\hof\cluster
5. Restart Yellowfin.
Note: You only need to do this for one of the nodes.
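For reference, here is a minimal sketch of what the edited file might look like. The exact contents of RepositoryPing.xml shipped inside i4-core.jar can vary by version, so treat everything except the attribute itself as an assumption and keep whatever else the extracted file contains as-is:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Edited copy, saved to ROOT\WEB-INF\classes\com\hof\cluster\RepositoryPing.xml -->
<!-- The only deliberate change is adding the clear_table_on_view_change attribute, -->
<!-- which tells the node to wipe the ClusterPing table when the cluster view -->
<!-- changes (including this node starting up). -->
<com.hof.cluster.RepositoryPing clear_table_on_view_change="true" />

Files under WEB-INF\classes take precedence over the copy packaged in the jar, which is why placing the edited file there overrides the default without repackaging i4-core.jar.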
Option 2: Manually clear the table
Just clear the ClusterPing table directly; it will repopulate itself with the active nodes anyway. For example:
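As a sketch, on most repository databases this is a single SQL statement (table name as above; adjust casing/quoting to match your particular repository database):

-- Remove every row, stale and active alike; live nodes
-- will repopulate their own entries automatically.
DELETE FROM ClusterPing;

Because the table repopulates itself with the active nodes, there should be no need to take the cluster down first.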
If you have further questions on this, please let us know.
Regards,
David