I have a couple of related but separate Python scripts, both of which make use of two internal modules that use logging.
The first script works fine using the root logger and captures the logging statements from the two modules. For the second script, however, I want a main log, but when it iterates over a list of servers I want the logging sent to a per-server log file, with logging to the main log file and console suspended in the meantime. I have a hacky solution at the moment, which I'll show below.
import logging

DEFAULT_LOG_FORMAT = "%(asctime)s [%(levelname)s]: %(message)s"
DEFAULT_LOG_LEVEL = logging.INFO


def get_log_file_handler(filename, level=None, log_format=None):
    file_handler = logging.FileHandler(filename=filename, encoding="utf-8", mode="w")
    file_handler.setLevel(level or DEFAULT_LOG_LEVEL)
    file_handler.setFormatter(logging.Formatter(log_format or DEFAULT_LOG_FORMAT))
    return file_handler


def process(server):
    server_file_handler = get_log_file_handler("%s.log" % server.name)
    root_logger = logging.getLogger()

    # This works, but is hacky
    main_handlers = list(root_logger.handlers)  # copy list of root log handlers
    root_logger.handlers = []                   # empty the list on the root logger
    root_logger.addHandler(server_file_handler)
    try:
        # do some stuff with the server
        logging.info("This should show up only in the server-specific log file.")
    finally:
        root_logger.removeHandler(server_file_handler)
        # Add handlers back in
        for handler in main_handlers:
            root_logger.addHandler(handler)


def main():
    logging.basicConfig(level=DEFAULT_LOG_LEVEL)
    logging.getLogger().addHandler(get_log_file_handler("main.log"))

    servers = []  # retrieved from another function, just here for iteration

    logging.info("This should show up in the console and main.log.")
    for server in servers:
        process(server)
    logging.info("This should show up in the console and main.log again.")


if __name__ == "__main__":
    main()
I'm looking for a less hacky way to do this. I realize that calling logging.info() and the like directly is part of the problem, so I have changed the code in the two modules to use:
logger = logging.getLogger("moduleA")
and
logger = logging.getLogger("moduleB")
So the main script, be it scriptA.py or scriptB.py, using the root logger, will get the events from those two modules propagated and logged to main.log. Another solution I've tried is adding a Filter to all the existing handlers that ignores everything from "moduleA" and "moduleB".
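Roughly, that looks like the sketch below (the IgnoreModulesFilter and mute_module_records names are just illustrative); it would still need to be attached before each process() call and detached afterwards, which is much the same juggling as before:

import logging


class IgnoreModulesFilter(logging.Filter):
    """Drop any record that originated from the two module loggers."""

    def filter(self, record):
        # record.name is the name of the logger the record was created on,
        # e.g. "moduleA" or "moduleA.something".
        return not record.name.startswith(("moduleA", "moduleB"))


def mute_module_records(handlers):
    # Attach the filter to the given handlers (e.g. the root logger's handlers)
    # and return it so it can later be detached again with removeFilter().
    ignore = IgnoreModulesFilter()
    for handler in handlers:
        handler.addFilter(ignore)
    return ignore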
My next thought is to create a new named logger for each individual server, with server_file_handler as its sole handler, add that handler to the two module loggers as well, and remove it all again at the end of process(). Then I could set the root logger's level to WARNING, so that all INFO/DEBUG statements from the two modules would go only to the server-specific logger.
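A rough sketch of what process() might look like under that idea; note that in this sketch I keep the module records out of the main handlers by toggling propagate on the module loggers while the handler is attached, rather than by dropping the root logger to WARNING, and the "server.<name>" logger name is just a placeholder:

def process(server):
    server_file_handler = get_log_file_handler("%s.log" % server.name)

    # Per-server logger whose only handler is the server-specific file.
    server_logger = logging.getLogger("server.%s" % server.name)
    server_logger.propagate = False  # keep its records out of the root handlers
    server_logger.addHandler(server_file_handler)

    module_loggers = [logging.getLogger("moduleA"), logging.getLogger("moduleB")]
    for module_logger in module_loggers:
        module_logger.addHandler(server_file_handler)
        module_logger.propagate = False  # suspend console/main.log for the modules

    try:
        # do some stuff with the server
        server_logger.info("This should show up only in the server-specific log file.")
    finally:
        for module_logger in module_loggers:
            module_logger.removeHandler(server_file_handler)
            module_logger.propagate = True
        server_logger.removeHandler(server_file_handler)
        server_file_handler.close()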
I can't exactly use hierarchical logger naming, unless it somehow supported wildcards, since I'd wind up with:
logging.getLogger("org.company") # main logger for script
logging.getLogger("org.company.serverA")
logging.getLogger("org.company.serverB")
logging.getLogger("org.company.moduleA")
logging.getLogger("org.company.moduleB")
Logging from the two modules would only propagate up to the main logger, but not the two server logs.
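To make that concrete, here's a tiny standalone example of the propagation behaviour (logger names as above, the file name is just for illustration):

import logging

logging.basicConfig(level=logging.INFO)

main_logger = logging.getLogger("org.company")  # would hold the main log's handlers
server_logger = logging.getLogger("org.company.serverA")
server_logger.addHandler(logging.FileHandler("serverA.log", mode="w", encoding="utf-8"))

module_logger = logging.getLogger("org.company.moduleA")

# This record climbs the dotted hierarchy through "org.company" to the root
# handler, but serverA.log stays empty: "org.company.serverA" is a sibling of
# "org.company.moduleA", not an ancestor, so it never sees the record.
module_logger.info("hello from moduleA")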
It's basically a they-expect-a-tree, I-need-a-graph problem. Has anyone done anything like this before, and what's the most Pythonic way to do it?