
I have a Python file that reads from a configuration file and initializes certain variables, followed by a number of test cases defined by pytest markers.

I run different sets of test cases in parallel by selecting these markers, like this: pytest -m "markername" -n 3

The problem now is that I no longer have a single configuration file. There are multiple configuration files, and I need a way to specify on the command line, at execution time, which configuration file to use for the test cases.

What I tried:

I wrapped the reading of the config file into a function with a conf argument.

I added a conftest.py file and added a command-line option conf via pytest_addoption:

def pytest_addoption(parser):
    parser.addoption("--conf", action="append", default=[],
        help="Name of the configuration file to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'conf' in metafunc.fixturenames:
        metafunc.parametrize("conf", metafunc.config.option.conf)
        

and then tried pytest -q --conf="configABC" -m "markername", in the hope that I could read that configuration file, initialize certain parameters, and pass them on to the test cases containing the given marker. But nothing ever happens, and I can't figure out why.

If I run pytest -q --conf="configABC", the config file gets read, but all the test cases are running.

However, I only need to run the specific test cases that use the variables initialized from the config file given on the command line. And I want to keep using markers because I'm also using parametrization and running the tests in parallel. How do I get, from the command line, which configuration file to use? Am I messing this up?

Edit 1:

#contents of testcases.py

import json
import pytest

...
...
...

def getconfig(conf):
    with open(str(conf) + '_Configuration.json', 'r') as config:
        data_obj = json.loads(config.read())
    globals()['ID'] = data_obj['Id']
    globals()['Codes'] = data_obj['Codes']          # list [Code_1, Code_2, Code_3]
    globals()['Uname'] = data_obj['IM_User']
    globals()['Pwd'] = data_obj['IM_Password']
    #return ID, Codes, Uname, Pwd

def test_parms():
    #Returns a list of tuples [(ID, Code_1, Uname, Pwd), (ID, Code_2, Uname, Pwd), (ID, Code_3, Uname, Pwd)]
    ...
    ...
    return l

@pytest.mark.testA
@pytest.mark.parametrize("ID, Code, Uname, Pwd", test_parms())
def testA(ID, Code, Uname, Pwd):
    ....
    do something
    ....

@pytest.mark.testB
@pytest.mark.parametrize("ID, Code, Uname, Pwd", test_parms())
def testB(ID, Code, Uname, Pwd):
    ....
    do something else
    ....
Can you please add a couple of simple tests and the expected output to your question? I'm still not completely sure what you want to achieve, and what actually happens. By "nothing happens", do you mean no tests are executed? Do you want to dynamically define the tests to run, or are the contents of the config file to be used to parametrize the tests, or something else? – MrBean Bremen

I'm trying to get which configuration file to use from the command line, and use that config file to initialize certain variables for parametrization and run the tests containing a specific marker. – loney61411

1 Answer


You seem to be on the right track, but are missing some connections and details.

First, your option looks a bit strange - as far as I understand, you just need a string instead of a list:

conftest.py

def pytest_addoption(parser):
    parser.addoption("--conf", action="store",
                     help="Name of the configuration file"
                          " to pass to test functions")

In your test code you read the config file, and based on your code it contains a JSON dictionary of parameter lists, e.g. something like:

{
  "Id": [1, 2, 3],
  "Codes": ["a", "b", "c"],
  "IM_User": ["User1", "User2", "User3"],
  "IM_Password": ["Pwd1", "Pwd2", "Pwd3"]
}

What you need for parametrization is a list of parameter tuples, and you also want to read the list only once. Here is an example implementation that reads the list on first access and stores it in a dictionary (provided your config file looks like the one shown above):

import json

configs = {}

def getconfig(conf):
    if conf not in configs:
        # read the configuration if not read yet
        with open(conf + '_Configuration.json') as f:
            data_obj = json.load(f)
        ids = data_obj['Id']
        codes = data_obj['Codes']
        users = data_obj['IM_User']
        passwords = data_obj['IM_Password']
        # assume that all lists have the same length
        config = list(zip(ids, codes, users, passwords))
        configs[conf] = config
    return configs[conf] 
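
Just to illustrate the shape of the data: with the sample JSON above saved as ConfigA_Configuration.json, the function returns a list of parameter tuples:

# illustration only, based on the sample JSON above
params = getconfig("ConfigA")
print(params)
# [(1, 'a', 'User1', 'Pwd1'), (2, 'b', 'User2', 'Pwd2'), (3, 'c', 'User3', 'Pwd3')]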

Now you can use these parameters to parametrize your tests (pytest_generate_tests can live in conftest.py or directly in the test module):

def pytest_generate_tests(metafunc):
    conf = metafunc.config.getoption("--conf")
    # only parametrize tests with the correct parameters
    if conf and metafunc.fixturenames == ["uid", "code", "name", "pwd"]:
        metafunc.parametrize("uid, code, name, pwd", getconfig(conf))

@pytest.mark.testA
def test_a(uid, code, name, pwd):
    print(uid, code, name, pwd)


@pytest.mark.testB
def test_b(uid, code, name, pwd):
    print(uid, code, name, pwd)

def test_c():
    pass

In this example, both test_a and test_b will be parametrized, but not test_c.
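
As a side note: if the testA and testB markers are not registered, newer pytest versions emit an "unknown mark" warning when you use them. A minimal sketch to register them (the descriptions are made up) can also go into conftest.py:

# conftest.py - register the custom markers used for -m selection
def pytest_configure(config):
    config.addinivalue_line("markers", "testA: tests in group A")
    config.addinivalue_line("markers", "testB: tests in group B")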

If you now run the tests (with the JSON file named "ConfigA_Configuration.json"), you get something like:

(Py37_new) c:\dev\so\questions\so\params_from_config>python -m pytest -v --conf=ConfigA -m testB test_params_from_config.py

...
collected 7 items / 4 deselected / 3 selected

test_params_from_config.py::test_b[1-a-User1-Pwd1] PASSED
test_params_from_config.py::test_b[2-b-User2-Pwd2] PASSED
test_params_from_config.py::test_b[3-c-User3-Pwd3] PASSED
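
This also works together with the parallel runs from the question: assuming pytest-xdist is installed, you can add the -n option as before, e.g.:

python -m pytest -v --conf=ConfigA -m testB -n 3 test_params_from_config.py

Each xdist worker is a separate process that does its own collection, so every worker reads the configuration file once and caches it in its own configs dictionary.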