=============
Writing Tests
=============

As of Glotter2 0.5.0, tests can be auto-generated by adding ``tests`` and ``use_tests`` items to
the project configuration in ``.glotter.yml``. For details, refer to these sections in the
`global Glotter2 configuration`_ documentation:

- :ref:`tests`
- :ref:`use_tests`

However, if your test cannot be auto-generated, then continue reading this document.

Glotter2 uses `pytest`_ behind the scenes for running tests. If you are not familiar with
`pytest`_, it may be helpful to learn the basics from their `documentation`_.

.. _project fixture:
.. _Creating a Project Fixture:
.. _above:

Creating a Project Fixture
==========================

A project fixture follows the same basic idea as a `fixture in pytest`_. It is used to "provide a
fixed baseline upon which tests can... execute." (from the `pytest fixture documentation`_)
Fixtures in `pytest`_ can also be parametrized (see `Parametrizing fixtures`_).

In the case of a project fixture, the "fixed baseline" of the tests is a set of sources that
implement a project specified by a `project key`_. All of this is handled automatically using the
``project_fixture`` decorator provided by Glotter2.

- Start by importing the decorator: ``from glotter import project_fixture``.
- Then create a fixture function and decorate it with the decorator.

  - The decorator takes a `project key`_ as a parameter. See `Project Keys below`_ for more
    information.

- The function can be named whatever you like, but I recommend naming it something related to the
  `project key`_.
- The function must also take a parameter called ``request``.

  - This is due to the way `pytest`_ works with parametrized fixtures. (See the
    `pytest parametrize documentation`_ for more information.)

The body of the function should consist of three steps:

- ``request.param.build()`` - Build the source file. (See `Directory Level Configuration`_)
- ``yield request.param`` - Provide the source to the test.
- ``request.param.cleanup()`` - Clean up after all tests have run.

Altogether, this should look like the following:

.. code-block:: python

    @project_fixture('my_project_key')
    def my_project_key(request):
        try:
            request.param.build()
            yield request.param
        finally:
            request.param.cleanup()

.. note:: While you may have multiple tests for a given project key, only one fixture is required
   per project key.

Writing a Project Test
======================

As per `pytest`_ conventions, any function whose name starts with ``test`` will be considered a
test. Start by creating such a function. The function must take a parameter with the same name as
the project fixture function you defined for the test (see `Creating a Project Fixture`_ above).

Next, decorate the function with the ``project_test`` decorator provided by Glotter2. Don't forget
to import it from glotter: ``from glotter import project_test``. This decorator also takes a
`project key`_ as a parameter. This project key must match the project key of the project fixture.
See `Project Keys below`_ for more information.

Any other decorators -- from pytest or otherwise -- can be added as needed after the
``project_test`` decorator.

How to implement the body of the function is up to you. The parameter of the test function named
after the project fixture will be of type ``source``. It has the following methods available:
- ``build(params='')`` - Build the source with optional parameters.
- ``run(params=None)`` - Run the source with optional parameters.
- ``exec(command)`` - Run a command inside the container where the source exists.
- ``cleanup()`` - Clean up the container where the source exists.

In most cases only ``run()`` should be used in the test. ``build`` and ``cleanup`` are called by
the project fixture as described `above`_. However, I can imagine a corner case where ``exec``
could be useful.

Both ``run`` and ``exec`` return the standard output from the container. In other words, ``exec``
returns the output of the command as a string, and ``run`` returns, as a string, the output of the
source when run with the provided parameters (if any). This value can be saved off and used for
assertions. (See the `pytest assertion documentation`_ for more information.)

Putting this all together, a sample test might look something like the following:

.. code-block:: python

    @project_test('my_project_key')
    def test_my_script(my_project_key):
        actual = my_project_key.run()
        assert actual.strip() == 'script was run'

.. _project key:
.. _Project Keys below:

Project Keys
============

A project key is just a string that refers to a single project, which can have multiple source
files and/or tests. Project keys are defined in the `global Glotter2 configuration`_. In order for
tests to run properly, the project key used here must refer to a project key specified in the
`global Glotter2 configuration`_. It is case sensitive.

In order to make things easier and prevent confusing typos, I recommend saving these strings as
constants somewhere in your project or using an enum with a ``key`` property as seen below:

.. code-block:: python

    from enum import Enum, auto


    class ProjectKeys(Enum):
        Baklava = auto()
        BubbleSort = auto()
        EvenOdd = auto()
        FileIO = auto()
        Factorial = auto()

        @property
        def key(self):
            return self.name.lower()

.. note:: For this example to work, the project keys in your `global Glotter2 configuration`_
   must match the lowercased names of the enum values letter-for-letter, since ``key`` returns
   ``self.name.lower()``.

Example
=======

If we bring this all together, here is an example of a set of tests for a factorial project.
Let's suppose that the ``ProjectKeys`` class is in ``test/__init__.py``. Here's how this would
look:

.. code-block:: python

    import pytest

    from glotter import project_test, project_fixture
    from test import ProjectKeys

    error_permutations = [
        ('no input', None, 'Please enter an integer'),
        ('invalid input: not a number', '"asdf"', 'Please enter an integer'),
        ('invalid input: negative', '"-1"', 'Integer must be positive'),
    ]

    working_permutations = [
        ('sample input: zero', '"0"', '1'),
        ('sample input: one', '1', '1'),
        ('sample input: ten', '10', '3628800'),
    ]


    @project_fixture(ProjectKeys.Factorial.key)
    def factorial(request):
        request.param.build()
        yield request.param
        request.param.cleanup()


    @project_test(ProjectKeys.Factorial.key)
    @pytest.mark.parametrize(
        'cli_args, expected',
        [
            pytest.param(cli_args, expected, id=description)
            for description, cli_args, expected in working_permutations
        ]
    )
    def test_factorial(cli_args, expected, factorial):
        actual = factorial.run(params=cli_args)
        assert actual.strip() == expected


    @project_test(ProjectKeys.Factorial.key)
    @pytest.mark.parametrize(
        'cli_args, expected',
        [
            pytest.param(cli_args, expected, id=description)
            for description, cli_args, expected in error_permutations
        ]
    )
    def test_factorial_errors(cli_args, expected, factorial):
        actual = factorial.run(params=cli_args)
        assert actual.strip() == expected
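The ``exec`` method mentioned earlier is rarely needed, but one corner case where it can help is
checking a side effect the program leaves inside its container, such as a file written to disk.
The snippet below is only a sketch: it assumes a hypothetical file I/O project whose key matches
the ``FileIO`` enum value above, a hypothetical output path ``output.txt``, a hypothetical
expected file body, and a container image that provides ``cat``. Adjust all of these to your own
project.

.. code-block:: python

    from glotter import project_test, project_fixture
    from test import ProjectKeys


    @project_fixture(ProjectKeys.FileIO.key)
    def file_io(request):
        request.param.build()
        yield request.param
        request.param.cleanup()


    @project_test(ProjectKeys.FileIO.key)
    def test_file_io_creates_file(file_io):
        # Run the source first so it has a chance to write its output file.
        file_io.run()

        # Read the file back from inside the container where the source ran.
        # 'output.txt' and the expected contents are hypothetical, and 'cat'
        # is assumed to exist in the container image.
        contents = file_io.exec('cat output.txt')
        assert contents.strip() == 'File contents written by the program'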
.. _pytest: https://docs.pytest.org/en/latest/
.. _documentation: https://docs.pytest.org/en/latest/
.. _fixture in pytest: https://docs.pytest.org/en/latest/fixture.html
.. _pytest fixture documentation: https://docs.pytest.org/en/latest/fixture.html
.. _Parametrizing fixtures: https://docs.pytest.org/en/latest/how-to/fixtures.html#fixture-parametrize
.. _pytest assertion documentation: http://doc.pytest.org/en/latest/assert.html
.. _pytest parametrize documentation: https://docs.pytest.org/en/latest/how-to/fixtures.html#fixture-parametrize
.. _Directory Level Configuration: directory-level-configuration.html#build
.. _global Glotter2 configuration: global-glotter2-configuration.html#projects