Possibly, or perhaps not. There's a statistical hump you cross as the number of engines gets very large, though I'm not sure it offsets the costs of having the parts count so high.

Since the Merlins can fail without causing a daisy-chain failure (unlike the Soviet N-1), you can think of each as providing an increment of the total thrust. Designs with many fewer than the Falcon's nine engines can't complete a mission after an early engine failure because each engine's contribution to the total is too large to compensate for. The Falcon is marginal in this regard, losing about 11% of maximum thrust per engine failure.

As the number of engines gets very large, say a hundred, you can launch through single and double failures without much effect at all: only a 1 or 2% drop in maximum thrust, well within design margins that already have to cover engine-to-engine thrust variation. So it's like evaluating the risk that a bad spark plug will prematurely terminate an airplane flight. Is it a 4-cylinder Cessna or a 112-cylinder B-50?
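The engine-out thrust penalty is just the reciprocal of the engine count. A minimal sketch (the function name is mine, not anything from the post):

```python
def thrust_drop_per_failure(n_engines: int) -> float:
    """Fraction of total thrust lost per failed engine, assuming all
    engines contribute equally (real engines vary slightly)."""
    return 1.0 / n_engines

print(f"{thrust_drop_per_failure(9):.1%}")    # Falcon 9: 11.1%
print(f"{thrust_drop_per_failure(100):.1%}")  # 100-engine cluster: 1.0%
```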

If you crunch through the per-engine failure statistics against the expected number of launch failures, launch success rates start rising again as the number of engines becomes large, because the criterion changes from all engines working to some allowed percentage of engines working.
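Treating the engines as independent, mission success becomes a binomial tail: the probability that no more than the allowed number of engines fail. A rough sketch, assuming an illustrative 1% per-engine, per-burn failure rate (the real number would come from test data):

```python
from math import comb

def mission_success(n: int, p_fail: float, max_failures: int) -> float:
    """Probability that at most max_failures of n independent engines fail."""
    return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
               for k in range(max_failures + 1))

p = 0.01  # assumed per-engine failure rate per burn (illustrative only)
print(f"1 engine, 0 failures allowed:    {mission_success(1, p, 0):.4f}")
print(f"9 engines, 1 failure allowed:    {mission_success(9, p, 1):.4f}")
print(f"100 engines, 11 failures allowed: {mission_success(100, p, 11):.8f}")
```

With that assumed rate, the 100-engine vehicle tolerating eleven failures is dramatically more reliable than a single engine that must work perfectly, which is the crossover the paragraph describes.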

On a re-usable system, if the expected engine life is comparable to the total number of engines, you get the huge benefit of running each engine to the physical end of its service life (with some swapping to keep the ages evenly distributed), flying it until it finally fails. If you instead have to make a very conservative estimate of each engine's remaining reliability, perhaps underrating the service life by a factor of two or three, you end up buying two or three times as many engines as you actually needed, just to reduce the chance of an engine failure.
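The cost of that derating is roughly linear in how far you underrate the service life. A toy illustration with made-up numbers:

```python
def engines_needed(total_burns: int, burns_per_engine: int) -> int:
    """Engines bought to cover a campaign, retiring each at burns_per_engine
    (ceiling division)."""
    return -(-total_burns // burns_per_engine)

true_life = 30      # assumed true service life in burns (illustrative)
derated_life = 10   # same engine conservatively rated at a third of that
campaign = 900      # total burns needed over the program (illustrative)

print(engines_needed(campaign, true_life))     # 30 engines
print(engines_needed(campaign, derated_life))  # 90 engines: 3x the buy
```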

Another benefit you gain is much faster accumulation of engine reliability data. With the first test launch of an 81-engine vehicle you get a statistical dataset of 81 burns. With the second test launch the dataset is 162 burns. That's a better data set than we had on the SSMEs fifty launches into the Shuttle program (three engines per flight, 150 burns). Very early in such a system's life, you could probably stop doing engine teardowns and inspections between launches because you'd have a good handle on the expected failure rates, and the large number of engines means you expect small numbers of engine failures as a routine part of operations.

Offsetting all this, of course, is complexity. If you're using ten times as many engines, you have to perform ten times as many engine assembly operations. Of course, if assembly becomes vastly more automated because of the bigger production run, that factor might go away. I think one thing that has inhibited the move to large numbers of engines (massive parallelism) is that we still build engines largely by hand (though the machining is automated), and the same crew can build a really big engine or a really small one in about the same amount of time, so cost doesn't scale anywhere near linearly with thrust.

ETA: I should dig up my statistics on this. It's pretty easy to get a hundred-engine system to the point where you almost always expect one engine failure, often two, very rarely three; four is almost unheard of, five won't happen in decades of frequent flight ops, and six or seven is rarer than being struck by an asteroid. It would take eleven failures to hurt a launch as much as a single engine-out on a Falcon 9.
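Those failure-count odds fall out of the same binomial model. A sketch with an assumed 1% per-engine, per-burn failure rate (illustrative; swap in measured data):

```python
from math import comb

def p_exact_failures(n: int, p_fail: float, k: int) -> float:
    """Probability that exactly k of n independent engines fail."""
    return comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)

n, p = 100, 0.01  # assumed per-engine failure rate (illustrative only)
for k in range(8):
    print(f"P(exactly {k} failures) = {p_exact_failures(n, p, k):.2e}")
```

With these assumptions, zero and one failure are each roughly a third of launches, two happens about a fifth of the time, and the probabilities collapse by an order of magnitude or more for each additional failure beyond that, matching the ladder of rarity described above.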

Of course what eventually would get you is metal fatigue, somebody forgetting a wrench, or a guy uploading the wrong version of flight control software.