It doesn't have to be perfect

Many people, both inside and outside the computer industry, hold the odd assumption that an algorithm needs to be perfect to be useful.

Yes, you want to minimize false positives and false negatives as much as possible, but you can also weigh them as a cost of doing business against the benefits of an automated first step. And that whole framing assumes a Boolean outcome. I'd posit that a well-calibrated degree of uncertainty attached to each answer is actually more valuable than the right answer itself. That is, if you can say that 80% of your results are certain, you know you only have to double-check the other 20% by hand. That's far better than saying you have a 0.5% false-negative rate - which, if mistakes are not an option, implies you need to double-check 100% of your results!
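The triage idea above can be sketched in a few lines. This is a minimal, hypothetical example, assuming a model that emits an answer plus a confidence score; the threshold and the data are placeholders, not a recommendation:

```python
def triage(results, threshold=0.95):
    """Split (answer, confidence) pairs into auto-accepted results
    and a queue for human review.

    Anything at or above the confidence threshold is accepted as-is;
    everything else gets flagged for a manual double-check.
    """
    auto, review = [], []
    for answer, confidence in results:
        if confidence >= threshold:
            auto.append(answer)
        else:
            review.append(answer)
    return auto, review

# Hypothetical model outputs: (predicted label, confidence)
results = [("spam", 0.99), ("ham", 0.62), ("spam", 0.97), ("ham", 0.88)]
auto, review = triage(results)
# Here only half the results need a human look, instead of all of them.
```

The point isn't the two-line loop; it's that a confidence score turns "check everything" into "check only the uncertain slice," which is where the economics of an imperfect algorithm start to work.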

Take advantage of keeping humans in the loop, iterating alongside their tool - the software. Humans aren't perfect either; we're slow but massively parallel processors that get things wrong a phenomenal amount of the time. What saves us is the layers of error-catching systems we have in place, from our brains to our bodies to the society and world we've constructed around ourselves.