Sticking with my original example of germ theory, my contention is this: because the theory has so far been confirmed by every test used to validate it, and because so many successful medical treatments and public health strategies have been devised using it, it’s safe to conclude at this point that germ theory is far more right than it is wrong. To the extent it may be wrong, it isn’t wrong in any significant way that would dramatically change our understanding of the various pathways viral and bacterial agents follow to spread infection. We may not always know immediately which of the available means of transmission an infectious agent is taking advantage of (airborne, waterborne, bodily fluids, etc.), but I wouldn’t want any scientist working on the problem who didn’t start with the premise that it was going to end up being one of these. To the extent germ theory is “relatively wrong” (to use Asimov’s phrase), we can now say with very near complete certainty that any wrongness lies at the extreme margins of the theory, not in its central framework.
Regardless, my original point was that “we have no way of knowing” how far we are from everything there is to know about either a specific thing or a wider phenomenon. We can know something about an object or phenomenon and still be ignorant of certain aspects of it. We may even know there is something about it we don’t understand or, alternatively, be completely ignorant of our own ignorance about aspects of it. That said, my point wasn’t that we have no way of knowing anything at all. Nor, I think, was that Asimov’s point. We clearly do have the capacity to know certain things about the world.
As for “all models being wrong” but some being “useful,” I would contend that what renders a successful model useful is sufficient (as opposed to absolute) correspondence with reality. In other words, an accurate working model need not be right about every possible or actual aspect of the phenomenon it is being used to describe. It need only be right about enough of them — or relatively less wrong (or more right) than the alternative models and explanations out there. That this requires the model to reflect a degree of knowledge about the thing it describes is, I believe, self-evident.