There's a whole confusing taxonomy of the various kinds of not-"real" things your tests can test against, so I'm not going to repeat it here. Many of the reasons I covered above suggest why you might want your fakes to be more, or less, real.
One particular term I use a lot which I don't see covered elsewhere in the literature is verified fake.
When you write a library, you provide an implementation of the thing the library does. But if your library does I/O (makes an HTTP request, generates an HTTP response, pops up a window, logs a message, whatever), you've just introduced a new barrier to testing: callers of your library might want to test their code that talks to your thing, and how are they supposed to figure out whether your thing did what they wanted it to?
A good library - and the libraries that I maintain are struggling to be "good" in this sense; for the most part, they're not - will provide you with a real (i.e. not a fake, double, stub, mock, or dummy) in-memory implementation of its functionality. One of the best examples of this is SQLite. If you need to test code that uses SQLite, you just make an in-memory SQLite database and supply it; there's virtually no reason to fake out the database.
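To make that concrete, here's a minimal sketch using Python's standard-library `sqlite3` module; the `count_users` function is a hypothetical bit of application code, not part of any real library:

```python
import sqlite3

# Hypothetical application code under test; the name and schema here
# are illustrative, not from any real library.
def count_users(connection):
    return connection.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# In the test, no fake is needed: a real SQLite database held entirely
# in memory is fast, deterministic, and leaves nothing on disk.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (name TEXT)")
connection.executemany(
    "INSERT INTO users VALUES (?)", [("alice",), ("bob",)]
)
assert count_users(connection) == 2
```

The code exercised by this test is the same code that runs against an on-disk database in production; only the storage location differs.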
One step removed from this is providing a verified fake - an implementation of your functionality which doesn't do anything "useful" (the way an in-memory SQLite database does) but nevertheless is verified against (a subset of) the same test suite as the real implementation, and which also provides an introspection API that allows test cases to verify that it did the right thing. This allows client code to import the fake from your library, test against it, and have a reasonable level of assurance that their code is correct in terms of how it's using the API. When they upgrade your library and its interface has changed, their tests will start failing.
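Here's a sketch of what the fake side of this might look like. All the names are hypothetical: imagine a library whose real implementation writes log lines to a file, and which ships an in-memory fake whose recorded messages are the introspection API:

```python
# Stand-in for the library's real implementation: it does actual I/O.
class FileLogger:
    def __init__(self, path):
        self._path = path

    def log(self, message):
        with open(self._path, "a") as f:
            f.write(message + "\n")

# The fake the library ships for its users' tests.  Same interface,
# no I/O; the `messages` list is the introspection API that test
# cases use to verify what happened.
class FakeLogger:
    def __init__(self):
        self.messages = []

    def log(self, message):
        self.messages.append(message)

# Client application code under test, written against the interface.
def greet(logger, name):
    logger.log("hello, " + name)

# A client's unit test: exercise the code against the fake, then
# introspect what was logged.
fake = FakeLogger()
greet(fake, "world")
assert fake.messages == ["hello, world"]
```

The client never has to write or maintain `FakeLogger` themselves; it arrives with the library, and changes when the library's interface changes.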
Tests which use an unverified fake carry a maintenance burden: they must manually keep the fake up to date with every version bump of the real implementation.
Tests which use a real implementation end up relying on lots of unimportant details, and will be potentially unreliable and flaky, because real external systems (even systems you might not usually think of as "external", like the filesystem, or your operating system's clock) have non-deterministic failure modes.
Tests which use a verified fake get the benefits of a unit test (reliability, speed, simplicity) along with the benefits of an integration test (assurance that it "really works", notification of breakage in the event of an upgrade), because they place the responsibility for maintaining the fake alongside the responsibility for maintaining the interface and its implementation.
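The "verified" part of a verified fake can be sketched as a single contract test suite that the library runs against both implementations, so the fake can't silently drift out of date. Everything here is hypothetical and deliberately trivial - the real implementation does file I/O, the fake keeps a list:

```python
import os
import tempfile
import unittest

class FileRecorder:
    """Stand-in for the real implementation: does actual file I/O."""
    def __init__(self, path):
        self._path = path

    def record(self, item):
        with open(self._path, "a") as f:
            f.write(item + "\n")

    def recorded(self):
        if not os.path.exists(self._path):
            return []
        with open(self._path) as f:
            return f.read().splitlines()

class FakeRecorder:
    """The verified fake: in-memory, same interface."""
    def __init__(self):
        self.items = []

    def record(self, item):
        self.items.append(item)

    def recorded(self):
        return list(self.items)

class RecorderContract:
    """Shared tests; each subclass supplies one implementation."""
    def test_record_then_read(self):
        recorder = self.make_recorder()
        recorder.record("a")
        recorder.record("b")
        self.assertEqual(recorder.recorded(), ["a", "b"])

class FileRecorderTests(RecorderContract, unittest.TestCase):
    def make_recorder(self):
        handle, path = tempfile.mkstemp()
        os.close(handle)
        os.unlink(path)  # the recorder creates the file itself
        self.addCleanup(
            lambda: os.path.exists(path) and os.unlink(path)
        )
        return FileRecorder(path)

class FakeRecorderTests(RecorderContract, unittest.TestCase):
    def make_recorder(self):
        return FakeRecorder()
```

Run with `python -m unittest`: any behavioral change to the real implementation that isn't mirrored in the fake shows up as a failure in the library's own test suite, before it ever reaches a client.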