It has a real impact on the speed of report execution ...
From my initial research into this issue, it appears that the underlying driver for both of these is the same; therefore, the core of the JDBC connection will be very similar.
I will continue to ask around to get a better idea of what the exact difference is, but in the meantime, can you please elaborate on what your current system is and what difference in performance you are noticing?
We are migrating a traditional operational system we built from a virtual machine to a full Azure cloud environment, utilising App Service and a cloud database.
When running the same reporting-type query in the Azure environment with the SQL connector enabled, the query is decidedly slower than running the same query with the Azure connector enabled.
I am trying to determine what the difference is, as I noticed the same thing you mentioned (that the driver is the same), and I need a long-term, sustainable solution to my reporting requirements after the Azure migration.
Hope that all makes sense.
After asking around a bit, it appears that the primary difference between the two is the encryption argument in the connection string when using the Azure driver. I am not sure exactly how this affects performance, but my guess is that without it, Azure needs to do a bit more work processing the request.
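For reference, the encryption-related parameters usually look something like this in a Microsoft SQL Server JDBC URL. This is only a sketch: the server and database names are placeholders, and this is not necessarily the exact connection string Yellowfin builds internally.

```java
// Hypothetical sketch of an Azure SQL JDBC URL with the encryption
// parameters commonly required by Azure SQL Database. The server and
// database names below are placeholders, not real endpoints.
public class AzureJdbcUrl {
    public static String buildUrl(String server, String database) {
        // encrypt=true forces TLS on the connection;
        // hostNameInCertificate pins the expected Azure certificate name.
        return "jdbc:sqlserver://" + server + ".database.windows.net:1433"
                + ";databaseName=" + database
                + ";encrypt=true"
                + ";trustServerCertificate=false"
                + ";hostNameInCertificate=*.database.windows.net"
                + ";loginTimeout=30";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("myserver", "mydb"));
    }
}
```

The TLS handshake and per-packet encryption implied by `encrypt=true` is one plausible source of the overhead difference between the two connectors.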
In the end if you are experiencing better performance with the driver specific to your data source, I would stick with that. Please let me know your thoughts.
Thanks for the feedback Nathan.
Happy to go with the connector that performs better, but I am having some issues with what seems to be connection timeouts. Azure closes a connection when there is inactivity, which could happen during the 3-hour YF default period. Surely YF should check for a connection with Azure before trying to run a query, and establish one if needed?
Could you confirm how the process works?
In addition to the timeout mentioned, Azure will also run your queries through a connection that only has as much memory allocated as it thinks you need. When it becomes obvious that you need more, the connection will temporarily drop, and you will be assigned to an environment with more memory.
Fortunately both of these can be fixed with a setting in Yellowfin called "Volatile Data Sources". Essentially this tells Yellowfin to check the connection every 30 seconds and, if the connection has gone bad, to re-establish it. Here is a complete article detailing what it does and how to enable it:
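Conceptually, the check works along these lines. This is a minimal sketch using the standard JDBC `Connection.isValid()` probe; the helper names and the fake connection are illustrative only, not Yellowfin's actual implementation.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Supplier;

// Hypothetical sketch of a "volatile data source" style check:
// keep the connection if it still answers a validity probe,
// otherwise re-establish it through a factory.
public class VolatileSourceChecker {
    /** Return the existing connection if it is still valid, else reconnect. */
    public static Connection ensureValid(Connection current,
                                         Supplier<Connection> reconnect) throws SQLException {
        if (current != null && current.isValid(5)) {
            return current;
        }
        return reconnect.get();
    }

    /** Demo helper: a stand-in Connection whose isValid() result is fixed. */
    static Connection fake(boolean valid) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) ->
                        "isValid".equals(method.getName()) ? valid : null);
    }

    public static void main(String[] args) throws SQLException {
        Connection good = fake(true);
        Connection dead = fake(false);
        Connection fresh = fake(true);

        // A healthy connection is kept; a dead one is replaced.
        System.out.println(ensureValid(good, () -> fresh) == good);   // true
        System.out.println(ensureValid(dead, () -> fresh) == fresh);  // true
    }
}
```

Running a probe like this on a schedule (the article above describes a 30-second interval) is what lets Yellowfin transparently survive Azure dropping idle connections.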
Apologies, I provided you a bad link, here is a better one:
Thanks Nathan - does this mean that the volatile data sources functionality is enabled for all data sources that we have set up in YF, or are we able to select which data source is the volatile one?
I have 10 or so local data sources which don't need to be checked regularly, and only one in Azure which would need this kind of check.
I have been running tests today with volatile data sources activated, with possibly worse results than before...
What else can we try to do to connect to an Azure data source reliably ?
Volatile sources should have no effect on the rest of your system. Can you please send me a copy of your entire logs folder so that I can look into this a little deeper? This can be found in your Yellowfin installation folder (yfinstall/appserver/logs).
Thanks. The errors that you are noticing are specifically why volatile data sources was created, so if you are still noticing them, it may be worth checking to ensure that the rows were inserted properly:
Did you restart Yellowfin after making the changes? Another thing that may be worth trying is JDBC verify. This typically only applies to the configuration database (basically volatile sources for the config DB), but I have heard of cases where it was necessary to enable both settings for regular data source connections.
Additionally, it appears that at about 15:30 system time the frequency of these errors decreased dramatically, which makes me think volatile sources started working. What time did you enable this?
Thanks for the response - we enabled volatile sources this morning (between 8 and 9 am) and restarted thereafter, and again at around 13:30.
I have also noticed that it seems to be more stable since mid afternoon, and will continue to monitor the results overnight. Depending on the outcome I will explore the JDBC verify tomorrow morning and advise.
Sounds good, let's hope it's fixed!
The tests I have been running every hour have proved successful - I am testing at a different frequency now to make sure we have a handle on it.
I will revert tomorrow.
Sounds good, let me know if there is anything I can do to help.
It all seems to be working as it should - I hope I don't jinx it!
You can close this ticket.
Sounds good! If you run into any further troubles, just reply and the case will be re-opened.