I should point out that my decision to go with ISAM was not taken lightly, even though this approach to database management is now considered passé, if not abandoned outright.
An important part of my rationale is to remove external dependencies (especially on proprietary Microsoft components), and also to get away from the tendency of RDBMSs to produce monolithic files.
For the type of data I am handling, both Berkeley DB and SQLite impose a severe performance penalty, and both store their databases as monolithic files. Part of the reporting system needs simply to scan through the (single) data table, producing a line or page of report per record - which is blisteringly fast when decoupled from an RDBMS: a test run on a sorted, unindexed example CSV file of ~10k records took under 2 seconds to compile the typical reports, ready for later printing (or not, in this case).
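For the curious, that reporting pass amounts to little more than this - a minimal sketch in Python rather than SB, with the file name and field layout invented for the example:

    import csv

    # One sequential pass over a sorted, unindexed CSV data table,
    # emitting one report line per record - no RDBMS involved.
    with open("specimens.csv", newline="") as table, \
         open("report.txt", "w") as report:
        for record in csv.reader(table):
            # Field positions are hypothetical: id, name, quantity.
            report.write(f"{record[0]:<10} {record[1]:<30} {record[2]:>8}\n")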
The only non-text components in the system will be the image files; the least human-readable data will be rich-formatted items (either RTF or HTML) such as chemical formulae, plus the reports themselves (RTF files).
The only truly slow activity in an ISAM model is the routine housekeeping and re-indexing, and both of those can be optimised as a part of the record update process, as is proper.
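To sketch what I mean by folding the index maintenance into the update (Python again, with a hypothetical in-memory index held as a sorted list of (key, offset) pairs):

    import bisect

    # Hypothetical in-memory index: a sorted list of (key, offset) pairs,
    # where offset locates the record in the data file.
    def update_record(index, key, offset):
        """Insert or replace a key as part of the record write itself."""
        pos = bisect.bisect_left(index, (key,))
        if pos < len(index) and index[pos][0] == key:
            index[pos] = (key, offset)        # existing record rewritten/moved
        else:
            index.insert(pos, (key, offset))  # new record

Flushing the updated index out alongside the record keeps the housekeeping incremental, rather than deferring it to a batch re-index.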
Apart from the indexes, the only other linked files are .memo files (for rich text and 'enormous field' data) and look-up tables (with perhaps a dozen options stored in each) - again, data for which an RDBMS is a waste of computing power.
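For what it's worth, the .memo pattern boils down to something like this: the fixed-length record keeps only a byte offset, and the variable-length rich text lives in the memo file. A rough Python sketch (the length-prefix layout and function names are my own inventions for illustration):

    def append_memo(memo_path, rtf_text):
        """Append a memo entry; return the offset to store in the record."""
        with open(memo_path, "ab") as memo:
            offset = memo.tell()
            data = rtf_text.encode("utf-8")
            memo.write(len(data).to_bytes(4, "little"))  # length prefix
            memo.write(data)
        return offset

    def read_memo(memo_path, offset):
        """Fetch the memo entry stored at the given offset."""
        with open(memo_path, "rb") as memo:
            memo.seek(offset)
            length = int.from_bytes(memo.read(4), "little")
            return memo.read(length).decode("utf-8")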
SB is supremely well adapted to handling ISAM - it can deconstruct a record into an array in one line of code (and vice versa) - and I do so love ISAM (though that could be because I used it so heavily when I was still a commercial programmer). It does seem a shame to waste that functionality by using an external product.
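By way of illustration, here is roughly what that one-liner looks like, approximated in Python (the delimiter and field values are invented; SB does this natively):

    # Record <-> array in one line each way.
    record_line = "1042|Sodium chloride|NaCl|58.44"
    fields = record_line.split("|")   # record -> array
    rebuilt = "|".join(fields)        # array -> record
    assert rebuilt == record_line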