💾 Archived View for tilde.pink › ~kaction › log › 2021-11-04.1.gmi captured on 2023-01-29 at 16:55:55. Gemini links have been rewritten to link to archived content
Most sources cite the same list of pros and cons of dynamic linkage; where they differ is in how they weigh them.
https://drewdevault.com/dynlib
https://blogs.gentoo.org/mgorny/2021/02/19/the-modern-packagers-security-nightmare/
https://stackoverflow.com/questions/1993390/static-linking-vs-dynamic-linking
I want to point out one more disadvantage of dynamic linkage, or, more precisely, the price a developer must pay to maintain a stable application binary interface. A stable ABI is essential to take advantage of the signature feature of shared libraries -- being able to update a library without updating the programs that depend on it -- but it takes its toll on interface design.
Let us consider the signatures of the following two functions:
```
struct foo *make_foo(const char *);
int make_bar(struct bar *, const char *);
```
Without looking at the documentation, we can be fairly confident that `make_foo` returns a pointer to dynamically allocated memory, while `make_bar` accepts a pointer to memory owned by the caller, which may well be allocated on the stack -- simpler, faster, and less error-prone.
The reason why many libraries (including libc) prefer to model their interfaces after `make_foo` instead of `make_bar` is binary compatibility: `struct bar` is part of the ABI, so adding fields to it breaks binary compatibility, while adding fields to `struct foo`, which callers only ever hold through a pointer, does not.
Price of ABI-stable interface of readdir(3)
This is not the only example: pretty much the whole point of the "pointer to implementation" pattern (a.k.a. the wasteful pointer dereference) in C++ is to maintain a stable ABI.
https://en.cppreference.com/w/cpp/language/pimpl
As can be guessed from the post title, I don't think any advantage of shared libraries is worth such sacrifices.