1
votes

I need to run one Perl CGI as root. I already understand most of the security concerns of doing this, but let me explain first.

The Perl CGI could run as the web server user but would require sudo access to run some commands. This is what I did first, but it grants those commands not just to that one CGI but to every process running as the web server user. Also, instead of running commands through sudo, I would prefer to use native libraries, which are much faster than spawning external commands. However, these native libraries require root access for some of their operations.

So what I had in mind was to run this one single CGI as root (I haven't found how yet; that's my main issue right now). The first thing the Perl CGI would do is change the effective uid $> / gid $) to a non-privileged user, change it back to root only when I need to call one of the native library functions requiring root access, then drop back to the non-privileged user.
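For what it's worth, the effective-uid dance I'm describing can be sketched like this; the `with_euid` helper and the choice of the `nobody` account are my own illustration, not an existing API:

```perl
#!/usr/bin/perl
# Sketch of the euid dance: start as root, keep real uid 0, and flip the
# *effective* uid back and forth around the privileged calls. with_euid()
# and the 'nobody' account are illustrative, not an existing API.
use strict;
use warnings;
use English qw( -no_match_vars );   # $EUID is $>, $UID is $<

# Run $code with effective uid $uid, restoring the previous euid after.
sub with_euid {
    my ($uid, $code) = @_;
    my $saved = $EUID;
    $EUID = $uid;
    die "seteuid($uid) failed: $!\n" unless $EUID == $uid;
    my @result = $code->();
    $EUID = $saved;                 # real uid is still 0, so this works
    return @result;
}

# Typical flow when the CGI is started as root:
my $nobody = (getpwnam('nobody'))[2];
if ($UID == 0 && defined $nobody) {
    $EUID = $nobody;                # drop privileges for normal work
    # ... handle the request unprivileged ...
    with_euid(0, sub {
        # privileged_xs_call();    # the one call that needs root
    });
}
```

Switching only the effective uid (not the real uid) is what makes the switch reversible: seteuid back to 0 is allowed because the real uid is still root.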

So far, do you have any comments on this idea?

Back to the main question: how can I allow that CGI to run as root? I've taken a look at suEXEC, but it doesn't seem to allow root, and you can't use setuid on a Perl script (the kernel ignores the setuid bit on interpreted scripts). I'm looking for some help/ideas here.

Best regards,

Yannick Bergeron

1
This is a bad idea, which is why Apache's suEXEC doesn't allow it. One possible workaround is to run your commands in cron as root and simply read the results with your CGI. You haven't specified what commands you're running so that may or may not work for your case. - ThisSuitIsBlackNot
The CGI script is a REST server (using Mojolicious) that provides services on top of system middleware. For example, it's being used to provide services on top of a clustered filesystem containing thousands of filesets. One of the commands lists all filesets, and only root can run it. Running it with sudo is possible, but it's much better to use Perl XS and call the C subroutine directly from Perl instead of running an external command and parsing its output. Since one of the reasons I'm doing this is performance, doing async operations or using cron has the same drawbacks as using sudo... - burgergold
One possibility I'm currently exploring is a binary wrapper such as: int main(int argc, char **argv) { execvp(argv[1], &argv[1]); return 1; } that would also validate that it's being executed as the web server user ID, and only for the CGI I'm targeting. That wrapper binary would have the setuid bit set on it - burgergold

1 Answer

2
votes

I would investigate one of two options, both fairly similar.

The first is a job engine. Your CGI would do nothing but post a request to the engine queue. The client would come back and poll for results. Works well if you already have such an asynchronous queue set up. The engine itself would run as root to be able to run requests. Requests would not, of course, include actual commands to run - security is still a significant requirement.
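The CGI side of the job-engine option can be sketched with a simple spool directory. The req/res layout, the file-name-as-job-id convention, and the request fields below are all invented for illustration; a real deployment needs a root-owned worker draining the req/ directory:

```perl
#!/usr/bin/perl
# Sketch of the CGI side of the job engine: enqueue a request as a JSON
# file in a spool directory and poll for the result. The req/res layout
# and request fields are illustrative only.
use strict;
use warnings;
use JSON::PP    qw(encode_json decode_json);
use File::Temp  qw(tempfile tempdir);
use Time::HiRes qw(sleep);

# Enqueue: write the request atomically under $spool/req, return a job id.
sub enqueue {
    my ($spool, $req) = @_;
    my ($fh, $tmp) = tempfile(DIR => "$spool/req", SUFFIX => '.tmp');
    print {$fh} encode_json($req);
    close $fh or die "close: $!";
    (my $final = $tmp) =~ s/\.tmp\z/.json/;
    rename $tmp, $final or die "rename: $!";
    return (split m{/}, $final)[-1];     # job id == file name
}

# Poll: wait up to $timeout seconds for the worker to drop $spool/res/$id.
sub poll {
    my ($spool, $id, $timeout) = @_;
    my $deadline = time + ($timeout // 10);
    while (time < $deadline) {
        my $path = "$spool/res/$id";
        if (-s $path) {
            open my $fh, '<', $path or die "open: $!";
            local $/;
            return decode_json(<$fh>);
        }
        sleep 0.2;
    }
    return undef;                        # timed out
}

# Demo with a throwaway spool; here we play the root worker ourselves.
my $spool = tempdir(CLEANUP => 1);
mkdir "$spool/$_" or die "mkdir: $!" for qw(req res);
my $id = enqueue($spool, { op => 'list_filesets', fs => 'gpfs0' });
open my $out, '>', "$spool/res/$id" or die "open: $!";
print {$out} '{"ok":1}';
close $out;
my $res = poll($spool, $id, 2);
print "ok=$res->{ok}\n";                 # prints ok=1
```

The write-to-temp-then-rename in enqueue matters: rename is atomic on the same filesystem, so the worker never sees a half-written request.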

The second option is a daemon. It would listen on an internal port (only on the localhost adapter, perhaps), receive the requests, and return the results on the same connection. If your CGI and daemon are both in Perl, you could even just serialise via Storable, though I'd generally prefer JSON - it's usually not much bigger than Storable, and it's fast, secure, amenable to reasonable tracing without modification, and may also be suitable for sending straight back to the client, depending on your needs.
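A minimal sketch of such a daemon, assuming newline-delimited JSON over a loopback socket; the port number and the op/args request shape are made up for illustration:

```perl
#!/usr/bin/perl
# Sketch of the daemon option: newline-delimited JSON over a socket bound
# to the loopback adapter only. Port 7070 and the 'op'/'args' request
# shape are invented; the dispatch table is where the privileged XS calls
# would live (the daemon itself runs as root).
use strict;
use warnings;
use IO::Socket::INET;
use JSON::PP qw(encode_json decode_json);

# Requests name an operation from this table - never a command to run.
my %ops = (
    ping => sub { { pong => 1 } },
    # list_filesets => sub { ... privileged XS call ... },
);

# Turn one JSON request line into one JSON reply line.
sub handle_line {
    my ($line) = @_;
    my $reply = eval {
        my $req = decode_json($line);
        my $op  = $ops{ $req->{op} || '' } or die "unknown op\n";
        +{ ok => 1, result => $op->($req->{args}) };
    } // { ok => 0, error => "$@" };
    return encode_json($reply);
}

sub serve {
    my ($port) = @_;
    my $srv = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',    # never expose beyond localhost
        LocalPort => $port,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";
    while (my $conn = $srv->accept) {
        print {$conn} handle_line($_), "\n" while <$conn>;
        close $conn;
    }
}

serve(7070) if $ENV{RUN_DAEMON};     # e.g. RUN_DAEMON=1 ./daemon.pl
```

Keeping the per-request logic in handle_line means the protocol can be tested without opening a socket, and malformed JSON or unknown operations come back as error replies instead of killing the daemon.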

Again, security - you don't send commands to run or SQL phrases, you send requests and parameters. But that's the same as your CGI code.

Both of these options require an extra process (or more). But they also avoid a lot of the overhead - you don't need to reinitialise the C runtime library each time, you don't need to recompile perl code each time, etc., and they take care of privileges outside your CGI code.