To me, Data and his fellow positronic androids were unique in that they were machines capable of thinking and making decisions for themselves rather than just following their programming without question. Voyager's EMH program learned to do this as well, thanks to Lewis Zimmerman (unwittingly) sharing Noonien Soong's philosophy that artificial intelligences should have the capacity to learn and grow.
IMO, the real issue isn't whether Starfleet can push AI technology further, but whether it should. As in the moral problem presented in TNG's "The Measure of a Man": at what point does an AI cease being a device and become a sentient being with guaranteed rights and freedoms under the Federation Constitution? Would it be better to keep AIs from becoming too advanced (from developing sentience and the ability to pursue their own wants and needs) to prevent them from being regarded as manufactured slaves?