BMC - Slowness observed with a single tenant in a multi-tenant environment

Bharath Kumar shared this question 6 years ago
Answered

Hello Team,

I have a strange issue. We have a multi-tenant environment with 8-10 tenants. Report performance is good for all tenants except one. We ran the following tests:

Test 1: Detailed report for tenant "Tenant A - TDS" --> Result: Report delivered 193 records and finished in 9 minutes and 15 seconds

Report Filter: Submit Date from 2017-09-01 until 2018-05-31


test1_yellowfin.log:


BMC:SR:2018-05-22 10:52:20:DEBUG (MIReportGeneratorProcess:notifyListeners) - BackgroundRunner: Details: Queued: 10:43:05 -- StartedRunning: 10:43:05 -- Completed: 10:52:20


Repeat Test 1 with filter Submit Date from 2017-10-01 until 2018-05-31


Result: Report delivered 3 records and finished in 10 seconds


BMC:SR:2018-05-22 10:53:03:DEBUG (MIReportGeneratorProcess:notifyListeners) - BackgroundRunner: Details: Queued: 10:52:53 -- StartedRunning: 10:52:53 -- Completed: 10:53:03


Test 2: Detailed report for Client-Organization "Tenant B" --> Result: Report delivered 159 records and finished in 1 second


Report Filter: Submit Date from 2017-01-01 until 2018-05-31


test2_smartreporting.log:


BMC:SR:2018-05-22 11:04:59:DEBUG (MIReportGeneratorProcess:notifyListeners) - BackgroundRunner: Details: Queued: 11:04:58 -- StartedRunning: 11:04:58 -- Completed: 11:04:59


Summary:


There are no performance problems with YF Client-Organization "Tenant B".


Performance of Client-Organization "Tenant A" is very poor and depends heavily on the number of records returned.


I checked the event and documentdata tables and they contain only a few thousand records.


Do you have any recommendations for me?


-Bharath

Replies (11)


Hi Bharath,

I imagine that because they are different client orgs they have their own data sources, so the difference may lie there. It would also be interesting to know whether you are using Client Source Substitution, and whether you are using a Client Reference ID Access Filter, so please let me know.

A very good diagnostic test in this situation is to use our InfoThreadRunner utility: set the frequency to one info_thread snapshot per second, run it for the duration of the slow report in Tenant A, then zip up the resulting HTML files and send them to us for analysis.
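If the InfoThreadRunner isn't to hand, a rough stand-in (just a sketch of my own, assuming you can run the JDK's jstack tool against the Yellowfin/Tomcat JVM on that server; the class and file names below are mine, not part of any product) is to capture one thread-dump snapshot per second yourself, along these lines:

import java.io.FileOutputStream;
import java.io.InputStream;

// Rough stand-in for the InfoThreadRunner (sketch only): capture one
// thread-dump snapshot per second from the running Yellowfin/Tomcat JVM
// using the JDK's jstack tool. Run as: java SnapshotRunner <pid> <count>
public class SnapshotRunner {
    public static void main(String[] args) throws Exception {
        String pid = args[0];                      // Yellowfin JVM process id
        int snapshots = Integer.parseInt(args[1]); // e.g. 90 for 90 seconds

        for (int i = 0; i < snapshots; i++) {
            Process p = new ProcessBuilder("jstack", "-l", pid).start();
            try (InputStream in = p.getInputStream();
                 FileOutputStream out = new FileOutputStream(
                         String.format("thread_snapshot_%03d.txt", i))) {
                in.transferTo(out);                // write this snapshot to a numbered file
            }
            p.waitFor();
            Thread.sleep(1000);                    // one snapshot per second
        }
    }
}

Run it for the duration of the slow report and you get the same kind of second-by-second picture of what each thread is doing.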

Another good test is to copy the report SQL from the slow report and run it in a third-party DB tool such as DbVisualizer or SQuirreL on the same server as Smart Reporting (i.e. don't run it directly on the database server, as that would not be a fair comparison), and see how long it takes to bring back the 193-record result set.
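If you would rather time that comparison from code instead of a GUI tool, a minimal JDBC sketch like the one below would do. Note that the driver class name and JDBC URL are placeholders only (take the real values from your BMC AR System JDBC driver documentation), and the SQL is whatever you copied from the slow report:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch only: time the raw report SQL outside Yellowfin, from the same
// server Smart Reporting runs on. The driver class and URL are placeholders;
// substitute the values documented for your BMC AR System JDBC driver.
public class QueryTimer {
    private static final String REPORT_SQL = "/* paste the report SQL copied from the slow report here */";

    public static void main(String[] args) throws Exception {
        Class.forName("com.example.ArJdbcDriver");            // placeholder driver class name
        String url = "jdbc:ar://remedy-host:port";             // placeholder URL; use the format from the driver docs

        long start = System.currentTimeMillis();
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(REPORT_SQL)) {
            int rows = 0;
            while (rs.next()) {
                rows++;                                        // fetch every row, as Yellowfin would
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(rows + " rows in " + elapsed + " ms");
        }
    }
}

If the query also takes around nine minutes outside Yellowfin, the delay is in the data source or driver rather than in the report rendering.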

regards,

David


Hi Dave,

Thanks for your response. We are using only a single data source connection, and the issue is observed with only a single tenant.

I will look into the suggestions you provided and see if they help.

Regards,

Bharath


OK, I will await your batch of info_thread snapshots, and I am also keen to hear how the second test went (i.e. running the report SQL in a DB tool on the Yellowfin server).

thanks,

David


Hi Dave,

Greetings!


We are not using Client Source Substitution or a Client Reference ID Access Filter.


We have also run the report SQL in DbVisualizer several times from the YF server; there is no issue with the DB, and the result came back in a few seconds.


I have attached the info_thread logs.


-Bharath


Greetings Bharath!

Thanks for doing that DbVisualizer test, and also thanks for the info threads.

I have fed them into the InfoThreadParser (a Java utility that I created) and made a short video (1 min) of the results for you, so that you can see what the threads were doing during those 90 seconds.

In the video I first draw your attention to a few features of the utility. The first text field shows that all 90 info_thread snapshots were loaded, and we can see that the thread called "http-bio-8181-exec-19" was present for the whole 90 seconds. Down at the bottom of the utility, the info_thread page number starts at 0, and the timestamp at 13:00:02.

Then I click the ">" button to advance one by one through the 90 snapshots. In the text area on the right-hand side, which shows the stack trace, you can see that the Yellowfin code is highlighted in yellow (all Yellowfin classes begin with com.hof).

So as we advance through the info_thread snapshots we can see that Yellowfin has sent a query to the BMC arsys driver:

com.hof.util.DBAction.doSelect(DBAction.java:880)
and is waiting for BMC to complete. Throughout the whole 90 seconds, Remedy does not complete and Yellowfin is left waiting.

I noticed that in the majority of info_thread snapshots there is almost no change in the BMC part of the stack trace, except that the method:

com.bmc.arsys.jdbc.framework.ARAPIClient.getDefaultView(ARAPIClient.java:513)
alternates back and forth with this other method:


com.bmc.arsys.jdbc.framework.ARAPIClient.getListViewObjects(ARAPIClient.java:480)
So to me it looks as though the delay is on the BMC side of things. However, I am always open to other opinions, so it would be great to hear what you and your colleagues think if you show them the video.

(By the way, if you would like a copy of the InfoThreadParser just let me know and I'll send it across to you, so you would not be reliant on the video.)
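To give a rough idea of what such a parser does (this is only a sketch of my own, not the actual InfoThreadParser, and it assumes plain-text jstack-style snapshots like the thread_snapshot_NNN.txt files from the earlier sketch), something like the following lets you follow a single thread across the snapshots and spot the com.hof frames it is sitting in:

import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

// Sketch only, not the actual InfoThreadParser: for each jstack-style
// snapshot file it finds the named thread and prints the top frames of its
// stack, marking Yellowfin frames (com.hof.*) so you can see where the
// thread is waiting from one second to the next.
public class ThreadFollower {
    public static void main(String[] args) throws IOException {
        String threadName = args.length > 0 ? args[0] : "http-bio-8181-exec-19";

        List<Path> snapshots;
        try (Stream<Path> files = Files.list(Path.of("."))) {
            snapshots = files
                    .filter(p -> p.getFileName().toString().matches("thread_snapshot_\\d+\\.txt"))
                    .sorted()
                    .collect(Collectors.toList());
        }

        for (Path snapshot : snapshots) {
            System.out.println("=== " + snapshot.getFileName() + " ===");
            boolean inThread = false;
            int framesPrinted = 0;
            for (String line : Files.readAllLines(snapshot)) {
                if (line.contains("\"" + threadName + "\"")) {
                    inThread = true;                   // header line of the thread we are following
                    continue;
                }
                if (inThread && line.trim().startsWith("at ")) {
                    String marker = line.contains("com.hof.") ? "  [YF] " : "       ";
                    System.out.println(marker + line.trim());
                    if (++framesPrinted == 5) break;   // top 5 frames are enough to see the pattern
                }
                if (inThread && line.isBlank()) break; // end of this thread's stack in the snapshot
            }
        }
    }
}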

Also, I noticed that the query Yellowfin passed on to the arsys driver was generated from the method MIViewProcess.getPreviewData, so I would be interested to learn whether setting the Default Data Preview option (in Administration->Configuration->System->Views) to None improves the performance at all.

regards,

David


Thanks Dave, it looks like you forgot to attach the video. Also, I have tested the Default Data Preview option for the Views and it did not help :(

Your video will help me investigate this internally.

Thanks a lot again!

Regards,

Bharath


Hi Bharath,

Silly me, after all of that writing about the video I then forgot to attach it!

Thanks for trying the Default Data Preview option; a pity it didn't help!

Now that I've actually attached the video, I'll be interested to hear your and your colleagues' opinion about the progression of the stack trace over the 90 seconds.

regards,

David


Thanks Dave for all your help. I am working on this internally and will update this thread once it is fixed.

-Bharath


OK Bharath, I will keep this open and await your response...


You can close this thread, we are working with the developers :)


OK Bharath, thanks for letting me know.

regards,

David
