Now Featuring!

By
Nick Borrell
November 27, 2018

Recently I was asked during an interview how I would approach adopting new features in Oracle as part of an upgrade.
At the time I felt I gave a pretty cursory reply: I just explained how I’d performed an upgrade (and migration) across platforms and had implemented new features as part of that process, having researched them beforehand. On reflection, I wish I’d given a more qualified and considered answer; so here goes.

I started using Oracle 6 as a junior dev, and my very first project was a database migration to Oracle 7.3 on a new platform. I was part of a small team that maintained a heavily customised off-the-shelf application. The application was mission critical (they almost always are, aren’t they?) and a lot was changing at once: the interface layer had to move from Oracle Forms 2.3 to Oracle Forms 3 (we had to deal with the dreaded “trigger steps” conversion), the reporting tools needed to be replaced, and a new hardware platform had been selected – we went from Sequent (Dynix) to Sun (Solaris). And, of course, we were upgrading the underlying database version.
As I recall, the project seemed so challenging that our DBA opted to leave rather than work on it (it may have been coincidence, though); on top of all the various layers that we had to manage changing (and testing), the incumbent DBA felt there were just too many new features to keep track of in the next version of Oracle. (He thought it had been a better product before: tablespace management had become more complex, and implementing security using roles just seemed unnecessarily complicated too.)
Although I was very much a junior at the time, I was quickly sent off to do the Oracle DBA course so I could be a critical part of the project. The key thing I took away from that project was an appreciation of how a very daunting piece of work could be broken down into manageable tasks. Once we had it broken into chunks, testing the individual chunks was trivial. I still remember the terror of seeing all the errors from the first “dry” migration – but resolving those errors just became one of the tasks. I also remember the satisfaction afterwards of thinking “people said it couldn’t be done, but we did it!” For this particular project, although we had an eye on the new features, we very consciously decided to stay away from using them until we’d been running our upgraded system for a while. (I recall the first thing we tried was the new-fangled “stored procedures”.)
More recently I worked on a project to upgrade an Oracle data warehouse to a later version of Oracle, but also to migrate it to Oracle Exadata – we were changing CPU architectures and immediately ran into a change in “endianness”, which influenced how we could move the data.
Exadata is rich with features, and some of these needed to be implemented as part of the migration. In this case, the project plan included time to investigate some of the features before the migration (specifically HCC compression) so that we could implement the correct “flavour” of compression for the various kinds of data we needed to store and the operations that would be performed against the compressed data. We also reorganised the tablespaces – they’d grown unwieldy in the old system and there was a strong case for reducing their number – and we implemented a new backup and recovery regime at the same time.

One project requirement was that the migration had to be repeatable – so it could be blown away and rerun as often as necessary (which proved important when we decided to adjust the implementation of some of the new features). Because I was reorganising the database using Oracle Data Pump, I used parameterised Unix shell scripts to manipulate the generated DDL files, moving objects around and/or compressing some objects as they were created and their data was imported into the new instance.

It was a largely “lights out” affair – in fact, on the morning I set it going, I was using my laptop tethered to my cellphone from a campsite 30km out of the city. I received emails from my Unix scripts on their progress and only went in to work at the end (since everyone wanted to be there). I recall we found two issues on our first run, but these were relatively minor – one was a consequence of a decision to move to a multi-byte character set, and the other was a network performance issue that we “fixed” with some creative thinking.
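For the curious, the DDL-manipulation step looked roughly like the sketch below. It’s a simplified reconstruction rather than the production script: the connect string, directory object, dump file, schema, table and tablespace names, and the compression clause are all placeholder values, and the sed patterns would need adjusting to match the DDL that Data Pump actually generates for your objects.

#!/bin/bash
# Sketch of a config-driven Data Pump reorganisation (illustrative only).
# Every name below (connect string, directory object, dump file, schema,
# tables, tablespaces, compression clause) is a placeholder.

. ./migration.conf   # e.g. sets CONNECT, DP_DIR, DUMPFILE, OLD_TS, NEW_TS,
                     #      COMPRESS_CLAUSE="COMPRESS FOR QUERY HIGH"

# 1. Have Data Pump write the DDL to a SQL file instead of executing it.
impdp "$CONNECT" directory="$DP_DIR" dumpfile="$DUMPFILE" \
      sqlfile=dw_ddl.sql logfile=ddl_extract.log

# 2. Edit the generated DDL: remap every reference to the old tablespace,
#    then add an HCC clause to the one table we want compressed.
#    The exact patterns depend on how Data Pump lays out your DDL.
sed -e "s/TABLESPACE \"$OLD_TS\"/TABLESPACE \"$NEW_TS\"/g" dw_ddl.sql |
sed -e "/CREATE TABLE \"DW\".\"SALES_HISTORY\"/,/;/s/TABLESPACE \"$NEW_TS\"/& $COMPRESS_CLAUSE/" \
    > dw_ddl_edited.sql

# 3. Pre-create the edited objects, then pull the data in behind them.
sqlplus -s "$CONNECT" @dw_ddl_edited.sql
impdp "$CONNECT" directory="$DP_DIR" dumpfile="$DUMPFILE" \
      content=data_only table_exists_action=append logfile=data_import.log

# 4. Tell the (possibly remote, possibly camping) DBA how it went.
mail -s "DW migration step complete" dba@example.com < data_import.log

Because everything was driven by the config file, rerunning the whole exercise against a freshly created instance was just a matter of adjusting the config and kicking the scripts off again – which is what made the “blow it away and repeat” requirement practical.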
What did I learn from this migration? We found afterwards that although we’d managed to speed up our backup time, with a little more thought we could have made it faster still – so that was one area where more investigation and research would have been useful. Overall, I was very pleased with the “repeatable” nature of the migration – it meant we could iteratively refine it during development, and we understood the timing of all of the steps well ahead of the actual go-live. My shell scripts were driven by config files, so it was easy to make changes when required. I’d certainly do an upgrade or migration that way again if the opportunity arises. It was particularly useful when we investigated different block sizes for the database – there are some decisions you normally only get to make once with Oracle (at database creation), but because we had a repeatable process we were able to test some of those decisions.
How would I go about adopting new features in Oracle?


I’d have to start with a caveat: just because a new feature exists certainly doesn’t mean you must find a way to use it. On the other hand, I’ve worked with DBAs (and developers) who hate change and refuse to consider new features – point blank. If you don’t like change, I’d say IT probably isn’t the right game for you. With Oracle in particular (well, any big RDBMS), there’s constant pressure to ensure an organisation remains on a supported version (no business wants to be paying for extended support). Management will love you when you can confidently implement a new feature that improves the organisation’s ROI. An RDBMS isn’t cheap, and businesses like to know that they’re getting value out of their investment.
I’d suggest it’s important to stay on top of the “New Features” PDF that is published with each new version of Oracle. I’d even suggest that you refamiliarise yourself with the Oracle Concepts guide from time to time too (you might have forgotten something valuable that you used to know). Additionally, things can change fundamentally between versions of Oracle, and the Concepts guide is a really good way of spotting those changes – occasionally, something that was once best practice becomes sub-optimal. One case in point was when the default optimiser changed from rule-based to cost-based – some PL/SQL and SQL degraded significantly (or missed out on being able to execute faster) because it was heavily laden with rule-based hints. I know that’s a long time ago now, but it’s the first example that occurred to me.
I’m used to working in organisations that prefer not to be early adopters. They like to see a new version “ironed out” before they move to it. But that doesn’t stop you from downloading it and, at the very least, investigating the new features without impacting any production systems. Better still, if you have the resources, create a “shadow” dev or test environment, especially if you can get something like production volumes of data to test the features against. It may no longer be the case, but the terminal release of one version of Oracle will often include at least some of the features that will appear in the next version (I wouldn’t implement those in a production environment, though; you might find there’s no support for whatever goes wrong). The more time you have to experiment without pressure, the more likely you are to use a feature effectively when the real need arises.

At this point I come back to the columnar compression features of Exadata – although it was tempting to use the maximum compression across a range of tables, it soon became apparent that there was a real downside to updating the compressed data, so maximum compression wasn’t the answer (for all the data). In fact, we redesigned our database to use the maximum compression on data that was unlikely to change and a lighter compression level on the more volatile data.
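To make that concrete, here’s a minimal sketch of what “different flavours for different volatility” can look like in DDL. The table and tablespace names are invented for the example; the point is simply that the heaviest HCC level suits data that will sit still, while data that’s still being updated gets a lighter level so that updates don’t quietly erode the compression.

#!/bin/bash
# Illustrative only: table and tablespace names are invented, and the
# connect string is expected in the CONNECT environment variable.
sqlplus -s "$CONNECT" <<'SQL'
-- Closed-off history: effectively read-only, so the heaviest HCC level is safe.
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(12,2)
) TABLESPACE dw_archive
  COMPRESS FOR ARCHIVE HIGH;

-- Current-period data: still being updated.  Rows updated in an HCC segment
-- get migrated into ordinary row-format blocks, so a lighter warehouse level
-- is the better trade-off here.
CREATE TABLE sales_current (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(12,2)
) TABLESPACE dw_data
  COMPRESS FOR QUERY LOW;
SQL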
Anyway – that’s what I should have said when I was asked about upgrades and new features.
Slàinte mhath, Nick.

Nick with a ‘k’ is a new-ish addition to the OptimalBI whānau but by no means to business intelligence, with over 20 years of consulting experience as a data warehouse developer and administrator.

Connect with Nick Borrell on LinkedIn or read his other blogs.
