2 votes

Reading Ulrich Drepper's "Shared Lib Howto" I came across the (to my understanding) strange fact that applications which use shared libraries are loaded in two steps. First the kernel loads the application's image, then it maps the dynamic linker-loader binary into the address space and passes control to it. The dynamic linker-loader runs in user space, presumably within the time slice of the application, and pulls in the rest of the code or links the references to already loaded shared objects. Was this (i.e., restricting runtime consumption) the reason why such a complicated scheme was chosen?
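A quick way to observe the result of this two-step load is to dump a process's own memory map, where both the program image and the dynamic loader show up. A minimal, Linux-only sketch (the /proc interface is the only assumption):

```c
#include <stdio.h>

int main(void)
{
    /* Dump this process's memory map. On a GLIBC-based Linux system
     * the output lists the program's own image as well as the
     * dynamic loader (ld-linux-*.so) mapped into the address space. */
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    int c;
    while ((c = fgetc(f)) != EOF)
        putchar(c);
    fclose(f);
    return 0;
}
```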

1
Where else would it run? In the kernel? What would be the benefit, given that running in the kernel would come with downsides, including more security concerns? – John Zwinck
Then why let the kernel load the primary image in the first place? And yes, there would be benefits and downsides - I asked why it is this way, not what additional questions you could come up with ;) – Vroomfondel

1 Answer

0 votes

why such a complicated scheme was chosen?

Because it's less complicated than the alternatives.

In particular, it allows GLIBC and its dynamic loader to be developed without rebooting, it allows multiple versions of the GLIBC loader to coexist on the same system, and it allows GLIBC to coexist with other libc implementations (each of which ships its own dynamic loader).
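As an illustration of the first point (a sketch, not GLIBC's actual test setup; the loader path is an assumption matching common x86-64 Linux installs): the dynamic loader is itself a runnable executable, so a freshly built copy can be exercised directly against an existing binary without installing it system-wide, let alone rebooting:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical loader path; substitute the copy under development.
     * Invoking ld.so directly with a target binary as its argument
     * makes it load and start that binary, bypassing the PT_INTERP
     * entry baked into the binary itself. */
    execl("/lib64/ld-linux-x86-64.so.2",
          "ld-linux-x86-64.so.2", "/bin/true", (char *)NULL);
    perror("execl"); /* reached only if the exec itself failed */
    return 1;
}
```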

why let the kernel load the primary image in the first place?

The kernel has to find and read the primary image in order to extract PT_INTERP from it. I am guessing that leaving the image in memory was less work than unloading it and letting the interpreter redo that work, and obviously also faster.
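To make the PT_INTERP step concrete, here is a minimal sketch that performs the same lookup the kernel does, printing the interpreter path embedded in a binary (64-bit ELF assumed; error handling kept to a minimum):

```c
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-binary>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    /* Read the ELF header and verify the magic and 64-bit class. */
    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1
        || memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0
        || ehdr.e_ident[EI_CLASS] != ELFCLASS64) {
        fprintf(stderr, "not a 64-bit ELF file\n");
        return 1;
    }

    /* Walk the program headers looking for PT_INTERP, as the kernel
     * does, and print the NUL-terminated path stored in that segment. */
    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        fseek(f, (long)(ehdr.e_phoff + (Elf64_Off)i * ehdr.e_phentsize), SEEK_SET);
        if (fread(&phdr, sizeof phdr, 1, f) != 1)
            break;
        if (phdr.p_type == PT_INTERP) {
            char *interp = malloc(phdr.p_filesz);
            fseek(f, (long)phdr.p_offset, SEEK_SET);
            if (interp && fread(interp, 1, phdr.p_filesz, f) == phdr.p_filesz)
                printf("interpreter: %s\n", interp);
            free(interp);
            break;
        }
    }
    fclose(f);
    return 0;
}
```

Run against, say, /bin/ls it should print the path of the system's loader, e.g. /lib64/ld-linux-x86-64.so.2; a statically linked binary has no PT_INTERP and prints nothing.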