Hello,
Fortunately, the failure to purge nodes should not be noticeable in normal operations. It will have the most impact on indexing, specifically on rebuilding indexes from scratch. If you are using Lucene instead of SOLR, the more "purgable" nodes you have in your database, the longer a full index rebuild will take - and due to the way those nodes are handled during the rebuild, the additional cost is substantial.
For every "purgable" node in the database, the index rebuild process will check the entire index if there is any reference to the node and remove it (even if the index is guaranteed not to contain it during a full build). This is the slowest possible operation during rebuild and can not be executed in parallel. So the effect is that your parallel rebuild is blocked while one thread scans the entire index.
In one of our customer projects we once had about 100,000 "purgable" nodes alongside 1.8 million normal ones, and the full rebuild took almost a day. The same rebuild without those 100,000 nodes completed in under 1.5 hours.
Note: This was on a 3.2/3.4 environment - I do not know if there have been changes in that regard.
Regards
Axel