Guidelines for scripting the Yellowfin DB

Nitin Parihar shared this question 6 years ago
Answered

I am looking for suggestions and best practices around scripting Yellowfin DB.

As we move from one environment to another, we usually back up the Yellowfin DB and restore it. We are trying to move away from this towards a more automated approach: we would basically like to script the out-of-the-box Yellowfin DB that has all our base reports.

Are there any known issues around scripting the Yellowfin DB and then recreating it?

Are there any tools you would suggest?

Are other enterprises doing this?


Thanks

Nitin

Replies (3)

Hi Nitin,

I don't see the difference between backing up a Yellowfin database and scripting one. For example, you can back up a MySQL database using the mysqldump utility, and the end result is that you have an SQL script of the MySQL database. The same goes for a PostgreSQL database: you can back it up with pg_dump, and the end result is an SQL script of the database.
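To make that concrete, here is roughly what that looks like on the command line (the host, user, and database names below are just placeholders, so substitute your own):

# MySQL: dump the Yellowfin repository to an SQL script, then load it into the target
mysqldump -h source-host -u yf_user -p yellowfin > yellowfin_backup.sql
mysql -h target-host -u yf_user -p yellowfin < yellowfin_backup.sql

# PostgreSQL: same idea with pg_dump and psql
pg_dump -h source-host -U yf_user yellowfin > yellowfin_backup.sql
psql -h target-host -U yf_user -d yellowfin -f yellowfin_backup.sql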

1) So with that in mind, I don't know of any known issues around scripting/backing up a Yellowfin DB and recreating/restoring it.

2) Most databases have their own tools for generating a backup script; however, you can also do it with some third-party DB query tools.

3) It is very common.


Apologies if it turns out that I have misunderstood your idea about scripting a database, and if that is the case then please let me know in more detail what you mean by it.


regards,

David

Hello Dave,

Thanks for your response. I am a teammate of Nitin, who posted this query.

We use both MS SQL Server and Oracle Yellowfin databases (v7.3). Generally, this is how we create scripts of the database. Please let us know if you see any issues with this approach; from your comment above, I think it should be okay.

1) We use a third-party tool to generate SQL scripts for all the DB schema objects (tables, views, stored procs, functions, etc.) from the YF DB.

2) We use a third-party tool to generate an SQL insert script for all the tables in the databases. This script includes one insert statement for each row.

a) Are there any tables with data types like BLOB that cannot be scripted out?

b) Are there any tables whose data need not be scripted because they contain only transaction/session data that is not useful to reinsert in the target database? If there are such tables that can be skipped, please let us know; this will reduce the number of "insert" scripts that we need to generate.

3) The standard Yellowfin installer seems to create the database on the fly during installation. Does this process use SQL scripts? If so, can those scripts be shared so we can refer to them while we script out our customized YF reports DB?

Thank You

Prashanth

Hi Prashanth,

I don't see any issues with your approach.

a) There is one column that has a BLOB: GeoPackLevelEntityGeometry.Polygon.

b) Yes, you definitely do not need the Event and EventArchive tables, nor the ActiveSession table. You also don't need the ReportInstance table, because that just holds the results of past report executions. (There is an example of skipping these after this list.)

3) No, the standard Yellowfin installer doesn't use SQL scripts; it uses an XML version of the schema (see attached). You can find this in the installer at the location:

com.hof.dataload.yellowfinschema.xml
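To illustrate point b): if your repository happened to be on PostgreSQL or MySQL, the native dump utilities can skip the data in those transient tables directly (host, user, and database names below are placeholders, and you may need to adjust table-name casing to match your schema; the MS SQL Server and Oracle tooling will have equivalent options):

# PostgreSQL: keep the table definitions but leave out the transient rows
pg_dump -h source-host -U yf_user \
  --exclude-table-data=Event \
  --exclude-table-data=EventArchive \
  --exclude-table-data=ActiveSession \
  --exclude-table-data=ReportInstance \
  yellowfin > yellowfin_baseline.sql

# MySQL: note that --ignore-table skips the table entirely (schema and data)
mysqldump -h source-host -u yf_user -p \
  --ignore-table=yellowfin.Event \
  --ignore-table=yellowfin.EventArchive \
  --ignore-table=yellowfin.ActiveSession \
  --ignore-table=yellowfin.ReportInstance \
  yellowfin > yellowfin_baseline.sql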


regards,

David

Thanks, Dave.

This is good info for us to proceed further. We will get in touch on this thread if we face any issues.

OK, we'll be interested to hear how you get on. Good luck!

Prashanth Patali,


I understand that this thread is now a few years old. Could you outline the approach you took to automate or script out the creation of the repository database for new implementations? If you created the DDL and DML for an initial configuration, would you be willing to share that with the community?


Thanks,

Lorenzo

Hi Lorenzo,

Unfortunately, I don't have scripts that could be shared here with the community. You could generate the DDL scripts through a tool like SQL Server Management Studio (for MS SQL).

For generating data insert scripts, you will probably have to look at other tools. However, we found maintaining the data scripts to be a challenge: whenever you change the definition of reports, views, etc., there is no easy way to know which table rows were added or modified so that you can regenerate the scripts.
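As one example, for MS SQL Server there is also Microsoft's open-source mssql-scripter command-line tool, which can script both the schema and the data. A rough sketch only (the server name, credentials, and output file are placeholders, and you should check the exact options against its documentation):

# Script the repository schema (DDL) plus per-row INSERT statements in one pass
mssql-scripter -S yf-sql-server -d yellowfin -U yf_user -P 'secret' --schema-and-data -f yellowfin_repository.sql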

Regards

Hi, Simon.

We have a process for standing up Yellowfin within a Kubernetes cluster. The part that is missing is the ability to create a default repository database, with all the associated objects and data, in a way that can be automated. Currently, attempting to create the database using the JAR file hangs. Using the Windows installer does work for creating the database; however, neither of these is automatable. We need to be able to spin these K8s implementations up within a very short period of time with little to no manual intervention. We are moving forward with one production implementation using the Windows installer to create the database, but we are hesitant to move forward with our 50+ other implementations until this process can truly be automated and made repeatable.

I have reached out to our account representative and technical contact at Yellowfin, but have not heard back.

To be clear, we will not be manually attempting to convert XML to SQL when this is something that can likely be provided by Yellowfin to us as paying customers. We need to be able to automate this process. There is more than a single XML file to be converted; it looks like well over a hundred XML files, as well as associated indexes and sequences.

Being able to rapidly deploy the repository database is critical to being able to serve our customers' needs for Yellowfin in our many production environments. The ideal solution would be to have all of the DDL and DML required to create the repository database in an automated way.
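To illustrate what we are after: once the baseline DDL and DML exist as plain SQL files, the restore side is trivially scriptable from an init container or CI job. A rough sketch, assuming SQL Server and a hypothetical yellowfin_repository.sql exported in advance:

# Create an empty repository database, then load the baseline DDL/DML into it
sqlcmd -S yf-sql-server -U sa -P "$SA_PASSWORD" -Q "CREATE DATABASE yellowfin"
sqlcmd -S yf-sql-server -U sa -P "$SA_PASSWORD" -d yellowfin -i yellowfin_repository.sql

That is the level of automation we need across our 50+ environments.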

Thanks,

Lorenzo

...
