This is a great starting point when you're experiencing performance issues within Yellowfin. These steps help you review some common performance tuning options and provide tools to gather information for Support in the event you want more assistance.
What exactly is slow?
The first step is narrowing down the point at which Yellowfin sees performance issues.
- Logging in for the first time
- Running particular dashboards
- Running particular reports, perhaps specific to a View or chart type
- Caching report filters
- Logging in with a particular user
- Performance issues during certain times of the day
Identifying this helps point us in the right direction when determining the best options moving forward.
Is this a new issue?
Have there been any changes to your environment, such as new data sources, software updates, or configuration changes?
Gathering System Info
By appending /info.jsp to your Yellowfin login URL, you can acquire a variety of system information. This includes your version and build number, your JVM Max Memory settings, how much memory is currently being used, and licensing information.
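If you want to capture these figures programmatically rather than eyeballing the page, a small parsing sketch like the following can help. The field labels and sample text are illustrative only; adjust the patterns to match what your info.jsp page actually prints.

```python
import re

def parse_memory(info_text: str) -> dict:
    """Pull JVM memory figures (in MB) out of info.jsp output.

    NOTE: the labels below are illustrative -- check your own info.jsp
    page for the exact wording your Yellowfin build uses.
    """
    fields = {}
    for label in ("Max Memory", "Free Memory", "Total Memory"):
        match = re.search(rf"{label}\D*(\d+)", info_text)
        if match:
            fields[label] = int(match.group(1))
    return fields

# Hypothetical snippet of an info.jsp page:
sample = "Max Memory: 1024 MB ... Total Memory: 512 MB ... Free Memory: 128 MB"
print(parse_memory(sample))
# {'Max Memory': 1024, 'Free Memory': 128, 'Total Memory': 512}
```

In practice you would fetch the page with an authenticated HTTP GET against your Yellowfin URL and feed the response body to this function.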
JVM Max Memory
JVM Max Memory is the amount of memory Yellowfin is configured to use. By default this is set to ~1GB. In most production instances you will want to increase this. There's no magic formula for determining the right value; however, this is the most common performance bottleneck. We recommend a minimum of 4GB of memory allocated for production instances.
Increasing JVM Max Memory
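On a typical Tomcat-based install, the maximum heap is raised with the JVM's -Xmx flag at startup. The sketch below uses Tomcat's setenv.sh convention; the path, file name, and 4GB figure are illustrative, so adjust them for your own install.

```shell
# setenv.sh, placed in Yellowfin's Tomcat bin directory, is picked up
# by catalina.sh at startup. Keep any options you have already set.
JAVA_OPTS="$JAVA_OPTS -Xms4096m -Xmx4096m"
export JAVA_OPTS
```

Restart Yellowfin after changing this, then confirm the new value on info.jsp.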
You can monitor this using a home-grown tool we have compiled. This tool is a Java program that pulls the memory information from the info.jsp page over a period of time. After clicking Stop, the tool displays a graph of your memory usage over the time the tool was run. You can download the memory profiler here. Note that if you want to provide the graph to Support for a ticket, you need to screenshot the graph prior to closing it.
Memory Profiler Tool
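As a rough DIY alternative to the profiler, the polling loop can be sketched as follows; the fetch callable is a placeholder for whatever scrapes the memory figure from info.jsp in your environment.

```python
import time

def sample_memory(fetch, samples=5, interval=0.0):
    """Poll a fetch() callable and record the reported used-memory
    figure (e.g. in MB) at each tick, for later graphing."""
    readings = []
    for _ in range(samples):
        readings.append(fetch())
        time.sleep(interval)
    return readings

# Stand-in for a real info.jsp scrape -- returns used memory in MB.
fake_usage = iter([400, 450, 520, 610, 700])
print(sample_memory(lambda: next(fake_usage)))  # [400, 450, 520, 610, 700]
```

A steadily climbing series that never drops back after garbage collection is the pattern that usually points at an undersized heap.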
High CPU Usage by Java or Yellowfin Freezes, Crashes, or Stalls
If you notice CPU spikes, or Yellowfin is crashing, we can examine which threads Yellowfin is running internally. Appending /info_threads.jsp to your Yellowfin login URL shows which threads are running at a given snapshot in time. On its own this isn't of much value; however, if you're having CPU spikes or crashes you should open a ticket posthaste!
We also provide a tool that captures the info_threads.jsp results over a period of time specified in its settings. Running this with 180 pages over a 1-second interval during your performance issues will give us an idea of what Yellowfin is actually doing at the time. Simply compress the folder full of these results and attach it to your Support ticket for analysis. We will usually ask you for this if we think it'll help.
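If you'd rather script the capture yourself, the loop is straightforward; the fetch callable below is a placeholder for an HTTP GET of your Yellowfin URL with /info_threads.jsp appended.

```python
import pathlib
import time

def capture_threads(fetch, out_dir, pages=180, interval=1.0):
    """Save each info_threads.jsp snapshot to its own file so the
    whole folder can be zipped and attached to a Support ticket."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(pages):
        (out / f"threads_{i:03d}.txt").write_text(fetch())
        time.sleep(interval)

# Stand-in fetcher; in practice this would GET .../info_threads.jsp.
capture_threads(lambda: "thread dump placeholder", "thread_snapshots",
                pages=3, interval=0)
print(sorted(p.name for p in pathlib.Path("thread_snapshots").iterdir()))
# ['threads_000.txt', 'threads_001.txt', 'threads_002.txt']
```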
Issues with Specific Reports or Dashboards
The first thing to consider here is whether the slow reports share a commonality. This is most often a View or data source. The first step is to grab the report SQL generated by Yellowfin and run the query directly against the database using a tool such as DbVisualizer. Does this time differ from the report generation in Yellowfin? If not, this could be related to your underlying dataset or configuration.
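One low-tech way to make the comparison concrete is to time the raw query from a script and note the delta against the report's render time in Yellowfin. The query callable here is a stub standing in for your actual database driver call.

```python
import time

def timed(fn):
    """Run a callable and return (result, elapsed seconds). Wrap the
    raw report SQL in this, then compare against Yellowfin's own
    report render time."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

# Stub for executing the report SQL through your DB driver.
rows, seconds = timed(lambda: ["row1", "row2"])
print(len(rows), seconds >= 0)  # 2 True
```

If the raw query takes roughly as long as the report, the time is being spent in the database, not in Yellowfin.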
Appending /info_cache.jsp to your Yellowfin login URL pulls up information on the various internal caches your instance is utilizing. For example, if you're experiencing slowness when entering the Browse page, check the 'Event Cache'. If your Cache hits percentage is low and your Events cached percentage is high, you may benefit from increasing the cache size.
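The heuristic above can be sketched as a quick calculation; the 50% hit-rate and 90% fill thresholds are our illustrative choices for this sketch, not official tuning guidance.

```python
def cache_health(hits: int, misses: int, cached: int, capacity: int):
    """Return (hit_rate, fill_rate, grow?) where grow? flags a low hit
    rate combined with a nearly full cache -- the pattern that suggests
    the cache size is worth increasing."""
    hit_rate = hits / (hits + misses) if hits + misses else 1.0
    fill_rate = cached / capacity if capacity else 0.0
    return hit_rate, fill_rate, hit_rate < 0.5 and fill_rate > 0.9

print(cache_health(hits=200, misses=800, cached=990, capacity=1000))
# (0.2, 0.99, True)
```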
More on Internal Caching
Config DB Tables
Is the Yellowfin configuration database abnormally large? If you suspect it is, this is usually best handled with Yellowfin Support.
Issues with Specific Data Sources
If you find a data source to be the commonality in report slowness, it's worth reviewing the source logs for timeouts or errors such as 'no connection available'.
Through the Admin Console you can access your Data Sources. The Connection Pool settings allow you to increase timeouts to a data source, increase the max connections, or enable a secondary connection pool for the data source.
Timeout Setting - Useful if your View or report runs a large query that takes an extended amount of time to return.
Max Connections - This dictates how many connections Yellowfin can open to this data source at any given moment. Generally 2 connections per expected concurrent user is sufficient.
Secondary Pool - This creates a secondary connection pool to this data source which is reserved solely for background processes such as broadcasts.
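The sizing rule of thumb above can be expressed as a one-liner; for example, 25 expected concurrent users suggests a pool of about 50 connections.

```python
def max_connections(expected_concurrent_users: int, per_user: int = 2) -> int:
    """Rule of thumb from this article: roughly 2 connections per
    expected concurrent user is generally sufficient."""
    return expected_concurrent_users * per_user

print(max_connections(25))  # 50
```

Treat the result as a starting point; also confirm your database server itself allows at least that many connections from Yellowfin.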
Are you consistently having issues with a data source over an unstable connection? Common examples include cloud-hosted sources such as AWS. You can enable volatile data sources to help mitigate these types of issues.
Failing Broadcasts
Have a look at your 'Schedule Management' page under 'Administration'. Are you seeing a lot of failed broadcasts? Or perhaps one that hasn't successfully completed for a long time? Failing broadcasts can have adverse effects on performance.
When in Doubt
If you're not sure where to start or how to proceed, do not hesitate to open a support ticket. We are trained to help you through your issues, and most of us don't bite! We are here to help ensure your success with our product.
1 tool to rule them all!
The tools mentioned above have now been combined into a single, simple tool, the 'Yellowfin Performance Snapshot Tool', which is attached to this article.