Performance with DB growth?

I'm working with a DBMS that currently contains about 7M nodes and 12M relationships. The job that adds a day's data started out running in about 17 minutes. It has indexes on all the major properties involved in MATCH and MERGE operations. Over a couple of months, that run time increased to 45 minutes. I've been intending to analyze it to see if I need additional indexes.
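For context, the indexes and the daily load pattern look roughly like this (the label and property names here are placeholders, not my real schema):

```
// One index per label/property used in MATCH and MERGE
CREATE INDEX event_id IF NOT EXISTS FOR (n:Event) ON (n.id);

// The daily job merges on the indexed property
MERGE (n:Event {id: $id})
  ON CREATE SET n.created = datetime();
```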

Due to a minor data corruption, I needed to migrate to a new DBMS. I exported the data with apoc.export.cypher.all and imported it through cypher-shell. I see that the generated Cypher includes all the same indexes, so in theory the two DBMSs should be identical, except for the corruption.
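For reference, the migration was essentially this (file name and credentials are placeholders; the export requires apoc.export.file.enabled=true):

```
// On the old DBMS: dump the whole graph as Cypher statements
CALL apoc.export.cypher.all("export.cypher", {format: "cypher-shell"});
```

and then, on the new DBMS:

```
cat export.cypher | cypher-shell -u neo4j -p <password>
```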

On the new DBMS, the daily data import job again takes about 17 minutes.

If the run time increase were due to needing additional indexes, I would expect to see the same 45-minute run time on both the old and the new DBMS, since they hold the same data and the same indexes.

So... is there some sort of optimization or defragmentation that needs to be performed on a DBMS periodically to maintain performance?

Neo4j Community Edition 4.4.3 on macOS Catalina