Tech Blog

Automating Database Updates in Spring Integration Tests with Liquibase

Posted by James Cox

Mar 9, 2014 4:32:48 PM

The What

The goal of this post is to show how you can leverage your database change management process to keep your test schema up to date. I've been using Liquibase as a database change management tool for quite a while. It makes migrating database changes to a new environment painless and repeatable. If you are not using it yet, I recommend giving it a try.

I'm also a big fan of using Spring's embedded database testing construct. With a few simple lines of configuration and a SQL file, you can create an actual database to use for integration testing.

The Why

After manually updating my test schema for the 20th time by using SQuirreL SQL to extract the table definitions and foreign key relationships, I began to think there might be a better way. Sure, it takes less than ten minutes each time, but that is ten minutes I could spend doing something infinitely more interesting.

What if we could use Liquibase to keep the test schema in sync with the changesets we already use for database change management? If we applied the changesets to our test schema each time the integration tests are run, we would also have the added benefit of testing our database migration process.

Workflow

A baseline schema and the contents of the DATABASECHANGELOG table are captured from the current database. An ideal time for this to occur is after a release or at the end of an iteration.

In the Spring XML configuration file, an embedded database is created and populated with the contents of the DATABASECHANGELOG table, which prevents changesets created prior to the baseline from being reapplied. Liquibase then updates the embedded database with the baseline changeset, followed by all of the subsequent changesets that have been added to the project.

In our Spring integration test, we use a ResourceDatabasePopulator to add any additional data required by the test.
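If you prefer to stay in XML, the namespace equivalent of a ResourceDatabasePopulator is the jdbc:initialize-database element. A minimal sketch, assuming a hypothetical testData.sql on the test classpath:

```xml
<!-- Runs testData.sql against the embedded database after it is created.
     The script name is illustrative; point it at your own test data file. -->
<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="classpath:testData.sql" />
</jdbc:initialize-database>
```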

Implementation

We first need to capture a baseline and an export of the contents of the DATABASECHANGELOG table. We could do this manually, but that hardly qualifies as automated. Instead, we can use Liquibase both to create a baseline changelog from our existing database and to dump the contents of the DATABASECHANGELOG table with the changelogSyncSQL Liquibase command. I've put this functionality in a Gradle script available in the Sample Project.
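Under the hood, the Gradle task boils down to two Liquibase commands. A hedged sketch, with the connection details as placeholders you would replace with your own:

```shell
# Generate a baseline changelog from the current schema
liquibase --driver=org.h2.Driver \
    --url=jdbc:h2:mem:demo --username=sa --password= \
    --changeLogFile=baselineChangelog.xml \
    generateChangeLog

# Emit the SQL that marks the baseline changesets as already applied
liquibase --driver=org.h2.Driver \
    --url=jdbc:h2:mem:demo --username=sa --password= \
    --changeLogFile=baselineChangelog.xml \
    changelogSyncSQL > changelogSync.sql
```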

Creating the embedded database is easy using Spring XML configuration. We seed the empty database with the changelogSync file:


<jdbc:embedded-database id="dataSource" type="H2">
    <jdbc:script location="classpath:changelogSync.sql" />
</jdbc:embedded-database>

We can also run Liquibase directly from the XML test configuration:


<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
    <property name="dataSource" ref="dataSource" />
    <property name="changeLog" value="classpath:/consolidatedChangelog.xml" />
    <property name="defaultSchema" value="demo" />
</bean>

The Sample Project and Code

I've created a working demo project on GitHub.

The project is an implementation of the classic Student/Course data model. In version 1 of the project, we create a Student and a Course table. Each Student record has a foreign key to the Course table.

[Diagram: version 1 data model, with a STUDENT foreign key referencing COURSE]

After completing version 1 of the project, a baseline schema and changelogSync file were created with the command:


gradle -Penv=local liquibaseCreateBaseline

Both output files were put into src/test/resources. In release 2.0, we realize that our data model is a bit naive: it only allows a student to be enrolled in one course at a time. We add some changesets to remodel the relationship as many-to-many.
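Sketched in Liquibase's XML, the version 2 changesets might look roughly like this. The STUDENT_COURSE table comes from the project, but the column and constraint names here are illustrative:

```xml
<changeSet id="2.0-create-student-course" author="jcox">
    <!-- Join table for the many-to-many relationship -->
    <createTable tableName="STUDENT_COURSE">
        <column name="STUDENT_ID" type="BIGINT">
            <constraints nullable="false"/>
        </column>
        <column name="COURSE_ID" type="BIGINT">
            <constraints nullable="false"/>
        </column>
    </createTable>
    <addForeignKeyConstraint constraintName="FK_SC_STUDENT"
        baseTableName="STUDENT_COURSE" baseColumnNames="STUDENT_ID"
        referencedTableName="STUDENT" referencedColumnNames="ID"/>
    <addForeignKeyConstraint constraintName="FK_SC_COURSE"
        baseTableName="STUDENT_COURSE" baseColumnNames="COURSE_ID"
        referencedTableName="COURSE" referencedColumnNames="ID"/>
</changeSet>
```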

[Diagram: version 2 data model, with STUDENT and COURSE linked through the STUDENT_COURSE join table]

Running the CourseDAOTest will create the schema from the baseline we captured at the end of version 1, insert the contents of the DATABASECHANGELOG table to prevent the version 1 changesets from being reapplied, and then apply the version 2 changesets that add the join table STUDENT_COURSE.

Finally, we add some sample data and exercise the DAO method.

The sample project requires Gradle to build and run the tests. An Eclipse project file can be generated with the standard Gradle eclipse plugin:

gradle eclipse

Additional Tips and Tricks

It's a good idea to keep your embedded database lean by avoiding large data loads. One way to do this is to use a Liquibase context so that data-oriented changesets are not run against the embedded database:

<changeSet id="production course and student data" context="non-testing" author="jcox">
    <!-- production-only data loads go here -->
</changeSet>
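The context attribute only takes effect if Liquibase is told which contexts to run; with no context filter, every changeset runs. A sketch of the SpringLiquibase bean with its contexts property set, assuming we name the test context "testing":

```xml
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
    <property name="dataSource" ref="dataSource" />
    <property name="changeLog" value="classpath:/consolidatedChangelog.xml" />
    <property name="defaultSchema" value="demo" />
    <!-- Changesets with no context still run; context="non-testing" ones are skipped -->
    <property name="contexts" value="testing" />
</bean>
```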

I've added additional Liquibase targets to the sample project's build file. This should make running Liquibase from the command line a snap:


gradle -Penv=local liquibaseUpdate

Additional environments can be added to the config.groovy file.

What's Not Covered

Using an embedded database works well if you have a small to medium-sized data model; very large data models become impractical in an embedded database.

In the sample project we used an H2 database as both the production database and the embedded test database. In real-world scenarios you will most likely be using MySQL or Oracle as your production database. When dealing with multiple database types you will need to pay attention to compatibility issues; for example, using a database-specific function like Oracle's SYSDATE in a changeset will cause problems.
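One way to sidestep such differences is Liquibase's property substitution, which resolves a placeholder differently per database. A sketch, with the column name as an illustrative example:

```xml
<!-- ${now} resolves to a different expression depending on the target database -->
<property name="now" value="SYSDATE" dbms="oracle"/>
<property name="now" value="CURRENT_TIMESTAMP" dbms="h2,mysql"/>

<changeSet id="add-enrolled-date" author="jcox">
    <addColumn tableName="STUDENT">
        <column name="ENROLLED_DATE" type="DATETIME" defaultValueComputed="${now}"/>
    </addColumn>
</changeSet>
```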

Finally, we've neglected the issue of keeping test data in sync with the changes we've made. But I think that is deserving of another blog post...

 

Topics: Agile and Development
