
Feature/unit test missing assert #629


Open · wants to merge 4 commits into master

Conversation

gardian12

Hello,
I've added a new check which raises a finding if no assertion is found in a testing method.
At least one static method call to CL_ABAP_UNIT_ASSERT should be present in each testing method for the unit test to be valuable.
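For illustration, a minimal sketch of a test method the check would accept; the class name and the asserted value are made up for this example:

CLASS ltc_example DEFINITION FOR TESTING RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    METHODS addition_works FOR TESTING.
ENDCLASS.

CLASS ltc_example IMPLEMENTATION.
  METHOD addition_works.
    DATA(sum) = 1 + 1.
    " The static CL_ABAP_UNIT_ASSERT call is what the check looks for.
    cl_abap_unit_assert=>assert_equals( act = sum
                                        exp = 2 ).
  ENDMETHOD.
ENDCLASS.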

@gardian12 gardian12 requested a review from a team as a code owner July 25, 2025 08:04

cla-assistant bot commented Jul 25, 2025

CLA assistant check
All committers have signed the CLA.

removed comments

Co-authored-by: abaplint[bot] <24845621+abaplint[bot]@users.noreply.github.com>
@gardian12
Author

Hello @bjoern-jueliger-sap, I know that this repository is a bit outdated now that the ABAP Cloud checks are in place. But besides that, I'm sure lots of companies and users are still using it today. Therefore, I would highly appreciate it if you would be so kind as to approve my pull request with a new check.
If my environment were set up for cloud, I would add the same check there.
What do you think?

@bjoern-jueliger-sap
Member

Don't worry, contributions to this repository are still welcome (although of course we don't want to add new checks here and then never add them to the Cloud version)!

Before we get into the code itself, just to clarify:

This check is supposed to enforce the rule about not adding tests without asserts to improve the coverage, right?

If so, I think we should perhaps be a bit more clear about its goal, since the current assertion in the documentation - "A unit test that lacks any assertions or validation of expected behavior is fundamentally meaningless." - is not actually true: You can write a unit test that is supposed to test that a certain unit doesn't crash:

method test_no_exception.
  data(cut) = new class_supposed_to_be_robust( ).
  cut->do( garbage_input ).
endmethod.

And this would test that cut->do does not crash when given garbage_input (by throwing either runtime errors or exceptions). Of course one can argue that one should actually test the full behaviour of the code (i.e. the return value or whatever side effects it is supposed to have in this case), but it's not meaningless (and some people do this also e.g. because they don't care about what happens with garbage input, just that it doesn't crash).

I don't think a static check can ever really detect these kinds of tests, but it shouldn't claim that they're useless - just that it's better to add an explicit assert (this also falls into making tests readable). However, users will probably want a pseudo comment for these cases, anyway.
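One hedged sketch of making that intent explicit, reusing the class and input from the example above; cx_some_error is a hypothetical exception type introduced only for illustration:

METHOD test_no_exception.
  DATA(cut) = NEW class_supposed_to_be_robust( ).
  TRY.
      cut->do( garbage_input ).
    CATCH cx_some_error INTO DATA(error).
      " Turns the implicit 'does not crash' expectation into an explicit assert.
      cl_abap_unit_assert=>fail( msg = |Unexpected exception: { error->get_text( ) }| ).
  ENDTRY.
ENDMETHOD.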

@gardian12
Author

Before we get into the code itself, just to clarify:

This check is supposed to enforce the rule about not adding tests without asserts to improve the coverage, right?

Thanks @bjoern-jueliger-sap for your quick answer.
Yes, you are right. The main purpose is not to deal with coverage but with the "what do I expect and what do I not expect" behavior of the implementation.

Regarding your example, I have several scenarios in mind (see the sketch after this list).
If cut->do( garbage_input ) changes any kind of instance variable or reference, an assert is easy to implement and highly welcome.

If cut->do( garbage_input ) changes any kind of DB entry, an assert can check that as well.

If cut->do( garbage_input ) changes nothing because it only forces a UI display, it should not be part of the testing code at all.
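For the first scenario, a minimal sketch, assuming cut exposes a readable attribute last_input that do( ) is expected to update (the attribute is made up for illustration):

METHOD test_do_updates_state.
  DATA(cut) = NEW class_supposed_to_be_robust( ).
  cut->do( garbage_input ).
  " Asserts on the instance state that do( ) is expected to change.
  cl_abap_unit_assert=>assert_equals( act = cut->last_input
                                      exp = garbage_input ).
ENDMETHOD.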

Nevertheless, I'm a fan of generic solutions that allow for the unexpected. Therefore, a pseudo comment option should be in place.
Initially I had implemented it, but it seems there is a gap in the syntax parser, which does not expect a pseudo comment in test code.
Maybe you can advise me on how to get it running.

REPORT y_example.

CLASS y_example_class DEFINITION FOR TESTING.
  PUBLIC SECTION.
  PROTECTED SECTION.
    METHODS test FOR TESTING.  "#EC MISS_UT_ASSERT
ENDCLASS.

CLASS y_example_class IMPLEMENTATION.
  METHOD test.
    RETURN.
  ENDMETHOD.
ENDCLASS.

Something like that.
The test code is already prepared; only the pseudo comment option is disabled in the check's meta description.
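For reference, a sketch of how that option would presumably be enabled, assuming the usual code pal pattern of filling the settings structure in the check's constructor; the field name and value format are assumptions based on other checks, not verified against this PR:

METHOD constructor.
  super->constructor( ).
  " Assumption: registering the pseudo comment here is what allows "#EC MISS_UT_ASSERT.
  settings-pseudo_comment = '"#EC MISS_UT_ASSERT' ##NO_TEXT.
ENDMETHOD.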

@bjoern-jueliger-sap
Member

I'll have a look at it. The parser should not delete pseudo comments in test code any more than it does in real code, but the problem might be the placement of the comment in relation to the finding.

@bjoern-jueliger-sap bjoern-jueliger-sap self-assigned this Aug 5, 2025
The CodePal always looks for the pseudo comment at the position of the finding. Since we position the finding on the first line of the method implementation, the unit test also needs to place the pseudo comment on that first line; then it works.
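A sketch of that placement, assuming "first line of the method implementation" refers to the METHOD statement itself:

CLASS y_example_class IMPLEMENTATION.
  METHOD test. "#EC MISS_UT_ASSERT
    RETURN.
  ENDMETHOD.
ENDCLASS.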
@bjoern-jueliger-sap
Member

I've pushed a small change - the pseudo comment works now; you were just looking for it at the wrong position.

@bjoern-jueliger-sap
Member

I've also run into a different case where the check reports a false positive that's a bit more problematic than the artificial case above, since if one follows the write custom asserts rule, every test method that relies only on such custom asserts will get a finding from this check.

The check currently only looks for the assert in the test method itself, but not in any called methods, so e.g. this code

CLASS actual_tests DEFINITION FOR TESTING
    RISK LEVEL HARMLESS
    DURATION SHORT
    INHERITING FROM tests.
  PRIVATE SECTION.
    METHODS custom_assert FOR TESTING RAISING cx_static_check.
    METHODS assert_tbl_eq_ignoring_order
      IMPORTING
        VALUE(exp_tab) TYPE STANDARD TABLE
        VALUE(act_tab) TYPE STANDARD TABLE.
ENDCLASS.

CLASS actual_tests IMPLEMENTATION.

  METHOD custom_assert.
    TYPES ty_tab TYPE STANDARD TABLE OF i WITH EMPTY KEY.
    DATA(my_expected_table) = VALUE ty_tab( ( 1 ) ( 2 ) ).
    DATA(my_actual_table) = VALUE ty_tab( ( 2 ) ( 1 ) ( 3 ) ).

    assert_tbl_eq_ignoring_order(
      exp_tab = my_expected_table
      act_tab = my_actual_table ).
  ENDMETHOD.

  METHOD assert_tbl_eq_ignoring_order.
    LOOP AT exp_tab ASSIGNING FIELD-SYMBOL(<exp_tab_line>).
      DATA(exp_idx) = sy-tabix.
      DATA(act_idx) = line_index( act_tab[ table_line = <exp_tab_line> ] ).
      IF act_idx > 0.
        DELETE act_tab INDEX act_idx.
        DELETE exp_tab.
        CONTINUE.
      ELSE.
        cl_abap_unit_assert=>fail(
          msg = |Line index { exp_idx } of expected table not in actual table.| ).
      ENDIF.
    ENDLOOP.
    IF lines( act_tab ) > 0.
      cl_abap_unit_assert=>fail(
        msg = |{ lines( act_tab ) } lines of actual table not in expected table.| ).
    ENDIF.
  ENDMETHOD.

ENDCLASS.

gets a finding in custom_assert and would get another finding in every other test method that reuses the custom assert that compares tables ignoring their order.

In the more modern cloud infrastructure it would be comparatively simple to also analyze the called methods for the assert (though one has to be careful to avoid recursion and the like when doing that), but in this older SCAN-based infrastructure that's a bit of a pain.
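As a rough, hypothetical sketch of such a transitive search (the two helper methods and the string-based matching are placeholders, not actual check code):

CLASS lcl_assert_finder DEFINITION.
  PUBLIC SECTION.
    METHODS contains_assert
      IMPORTING method_name   TYPE string
      RETURNING VALUE(result) TYPE abap_bool.
  PRIVATE SECTION.
    " Methods already inspected; guards against endless recursion in cyclic call chains.
    DATA visited TYPE SORTED TABLE OF string WITH UNIQUE KEY table_line.
    METHODS get_statements_of
      IMPORTING method_name       TYPE string
      RETURNING VALUE(statements) TYPE string_table.
    METHODS get_called_local_method
      IMPORTING statement     TYPE string
      RETURNING VALUE(result) TYPE string.
ENDCLASS.

CLASS lcl_assert_finder IMPLEMENTATION.
  METHOD contains_assert.
    IF line_exists( visited[ table_line = method_name ] ).
      RETURN. " already inspected, stop here to avoid endless recursion
    ENDIF.
    INSERT method_name INTO TABLE visited.

    DATA(statements) = get_statements_of( method_name ).
    LOOP AT statements ASSIGNING FIELD-SYMBOL(<statement>).
      IF <statement> CS 'CL_ABAP_UNIT_ASSERT=>'.
        result = abap_true. " direct assert in this method
        RETURN.
      ENDIF.
      DATA(called_method) = get_called_local_method( <statement> ).
      IF called_method IS NOT INITIAL AND contains_assert( called_method ) = abap_true.
        result = abap_true. " assert found transitively in a called method
        RETURN.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.

  METHOD get_statements_of.
    " Placeholder: a real check would take the flattened statements from its statement model.
    CLEAR statements.
  ENDMETHOD.

  METHOD get_called_local_method.
    " Placeholder: a real check would resolve a call to a local method from the statement.
    CLEAR result.
  ENDMETHOD.
ENDCLASS.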
