View Caching
Answered
During a view caching process I encountered an error:
An error occurred loading composite data: java.lang.Exception: No type info for column ID_79356_79500 on view 79564
Are there specific tables and fields in the database I can refer to for more information on that particular column? It's not clear why the original view works with the declared type but won't cache properly. I'm just not sure where to look to get more details about that field and its type.
I have been looking into the same problem. When I asked iCloud Support, they said there was a problem with the column ID. You should also try this and check whether it gets fixed.
Hi Tara,
yes, all columns are stored in the ReportFieldTemplate table. The column called "ColumnName" contains the values that the composite view creates, in the form tableName_columnName_999999, and the type is stored in the DataTypeCode column.
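For example, a lookup along these lines should surface the entry for the failing column (a sketch only, filtering on the name from your error message; the full ReportFieldTemplate schema varies between Yellowfin versions, so check yours):

    SELECT ColumnName, DataTypeCode
    FROM ReportFieldTemplate
    WHERE ColumnName = 'ID_79356_79500';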
Having said that, rather than using the Composite View functionality to combine different data sources these days, we definitely recommend either the Advanced Subquery feature or the new Data Transformation feature. Please give these a try and let us know what you think.
regards,
David
Thanks David. We are upgrading to and testing out 7.4 and the ETL tools, as per the suggestion. So far the ETL is great! I've created a simple test transformation, and the output to the db was successful. The only issue seems to be performance when I query that table in YF.
When I run a query directly against my transformation in the db the results come back in less than a second. When I run it via YF, it takes 19 seconds for the same query to produce results.
I've been looking at what we can tune. It looks like increasing the JVM max memory and CPU power might help. It also seems like our config db is abnormally slow for some reason (I can barely run audit reports), and I'm not sure why. When I run info_threads I see a lot of 'waiting' while I try to run this query.
Any thoughts on what I should look at to figure out why this query takes so long in YF but not in the db? This is the latest YF version.
Hi Tara,
I notice you say you ran the query directly against the transformation in the db. I'm assuming this means you ran it on the db server itself, using a tool such as SQL Server Management Studio. If so, this is not a fair comparison test, because database tools within a DBMS connect natively to the database rather than via JDBC as Yellowfin does, and they reside on the same server as the database, whereas Yellowfin is most likely on a different server (of course, if my assumption is wrong please let me know). A good comparison test is to get a database query tool such as SQuirreL or DBVisualizer, install it on the same application server that Yellowfin runs on, and then connect to the database via JDBC.
When you do such a test and it proves that Yellowfin is running much slower than the report SQL, the next thing to do is to use the info_thread facility as you have already done, but this time use the infothreadrunner utility and set it to take an info_thread snapshot every second. Do this for the full 19 seconds and then send me the resulting HTML pages for analysis.
Also, there are some features within Yellowfin that use more resources than usual; some examples are crosstab reports, sections, advanced functions, and certain charts. This is because they require a lot of Java processing after the result set has been returned from the database. So even though the report SQL may run fast when run directly against the data source, the report may take much longer to fully render in Yellowfin. With this in mind, please tell me what type of report you are running that is so slow.
Regarding the slow config db: when databases get very large, performance will degrade. What size is it?
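If your config database happens to be SQL Server, one quick way to check its size is the standard sp_spaceused procedure (run it against the Yellowfin configuration database; other DBMSs have their own equivalents):

    EXEC sp_spaceused;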
Also, what is the current setting for the JVM's max heap space, and please give the CPU specs.
regards,
David
OK, I will use a tool to test and get you those files. The report I was trying was just a standard tabular report.
The database shows it at 1.4 GiB; does that sound normal?
The current JVM max heap size is 8192 MB, and we're on 4 CPUs.
Great, that should give us some insight into what is going on over there.
The database size of 1.4 GB is fine, no problems there.
And the heap space of 8 GB sounds fine, as do the CPUs (unless you have more than 50 concurrent users).
I await the info_thread snapshots.
regards,
David
One other question about the ETL process that I couldn't find an answer to anywhere. We're trying to optimize the time of the initial output to the SQL database. I've noticed the data set size plays a role, but are there any configuration settings on the YF side that can help with that?
Hi Tara,
not really; the main resource overheads are taken up with creating the table and then doing all of those insertions.
Speaking of which, please be aware that even though Yellowfin provides the option of creating the new table for you, we recommend that your database administrator create the table that will be the target for the transformations and assign the relevant security access. This gives your DBA complete control over the data source (most DBAs prefer this) and lets them configure things exactly as they would like. For example, because of the many different possible types of DBMS (some of which handle integers, floats, and decimals quite differently), Yellowfin will always make a numeric column in the output table a float, whereas your DBA might prefer that it remain an integer like the original column.
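As an illustration, a DBA-created target table might look like the sketch below. The table, column, and user names here are hypothetical; the point is that sale_id and quantity can stay INTEGER, where an auto-created table would make them floats:

    -- Hypothetical target table for a transformation output.
    CREATE TABLE etl_sales_output (
        sale_id    INTEGER       NOT NULL,
        quantity   INTEGER       NOT NULL,
        unit_price DECIMAL(10,2),
        sale_date  DATE
    );
    -- Grant the Yellowfin connection user just the access it needs.
    GRANT SELECT, INSERT ON etl_sales_output TO yellowfin_user;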
regards,
David
Hi Tara,
it's been a month now, so I'm just wondering how you're getting on with this matter, and whether you'd like the ticket closed?
regards,
David