[Matplotlib-devel] Why keep codecov?

[Matplotlib-devel] Why keep codecov?

Eric Firing
https://codecov.io/gh/matplotlib/matplotlib/compare/5455bab77e75cee46d260c0f27110d6b4fa78270...3eb471dadd2fd397c840bc98db90a06845697131/changes

The link above is what I get for the "failing" codecov check on
https://github.com/matplotlib/matplotlib/pull/17266, a PR that passes
all the other tests.

The codecov.io output looks like complete garbage, listing a huge number
of untouched files.

I added 2 tests for the code I modified.  As far as I can see, from a
practical standpoint, codecov is incorrectly flagging the PR as failing,
leading to a misleading red X in the PR list.

I can see how a mechanized code-coverage check could be useful for a
project at an early stage, while it is building up its test
infrastructure, and at a later stage when run occasionally, especially
after major code additions.

As it is being run now, however, I think it is impeding, not aiding, the
development process.

I recommend that we remove it from the suite of tests run for every PR,
or at least keep it out of the accounting for the overall "pass/fail"
rating of the PR.
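
For what it's worth, codecov's documentation describes an
"informational" mode that does exactly the second option: the status
still reports, but never fails the PR.  A minimal codecov.yml sketch,
assuming the option behaves as documented:

    # codecov.yml -- minimal sketch: keep reporting project coverage,
    # but never count it toward the PR's pass/fail rating
    coverage:
      status:
        project:
          default:
            informational: true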

Eric

Re: Why keep codecov?

Jody Klymak
Hi Eric,

I agree regarding the codecov/project check.  I don't see how it helps, and I routinely ignore it during reviews because it is pretty obscure how you are supposed to fix it.

In my opinion, codecov/patch is pretty helpful for knowing whether someone's new code actually gets exercised.
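
The patch check is also configured separately from the project check;
a rough codecov.yml sketch of a patch-only gate (the 80% target is an
illustrative value, not matplotlib's actual setting):

    # codecov.yml -- sketch; the target below is illustrative,
    # not matplotlib's actual configuration
    coverage:
      status:
        patch:
          default:
            target: 80%   # require most new/changed lines to be hit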

Cheers,   Jody

Re: Why keep codecov?

tcaswell
I think the current problem with codecov is that azure uploads coverage reports on the master branch but not on the PR runs, which leads to the apparent drop in coverage: the edge cases are hit slightly differently on azure than on travis / appveyor.
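
As a sketch of what I mean (a hypothetical fragment, NOT matplotlib's
actual azure-pipelines.yml), a master-only condition on the upload
step would produce exactly this symptom -- azure coverage present in
the baseline but absent from PR runs:

    # Hypothetical azure-pipelines.yml step -- not our actual config
    - bash: bash <(curl -s https://codecov.io/bash) -f coverage.xml
      displayName: 'Upload coverage to codecov.io'
      condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')

Dropping the condition, so PR builds upload too, would make the
master baseline and the PR runs comparable.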

Tom

Re: Why keep codecov?

Eric Firing
Tom,

I'm sorry but I don't understand.  Let me put it another way: how often
does the codecov flagging and output lead to the discovery and solution
of a real problem?  At present, it is clearly an annoyance that incurs a
real penalty by putting false information into the PR listings; if that
isn't balanced by some substantial benefit, then we should drop it.
What am I missing?

Eric

Re: Why keep codecov?

tcaswell
My point is that the issue (coverage always being reported as dropping, in files unrelated to the PR) is due to a misconfiguration (azure not uploading reports on PRs) rather than a fundamental problem.

I think "is the check useful in principle" should be considered separately from "is it currently configured correctly".

Tom

Re: Why keep codecov?

Eric Firing
On 2020/05/06 5:10 AM, Thomas Caswell wrote:
> I think "is the check useful in principle" should be
> considered separately from "is it currently configured correctly".

OK, you have identified how it is misconfigured (I guess).  Now we can
separately consider my question, slightly modified, because what we
really care about is reality, not an abstract principle: Is the check
useful in practice?  Has it identified real problems that we then
fixed, to the benefit of the project, and is it likely to do so in the
future?

As Jody noted, codecov/patch seems to be doing something useful;
codecov/project does not.

Eric

Re: Why keep codecov?

Ryan May
So, the benefits of the project coverage check:

* Catching code paths that no longer run due to dependency changes
* Seeing how the PR impacts overall coverage (80% coverage of a large PR is different than 80% of a small one)
* Seeing code coverage changes due to other changes in CI configuration

While they're not relevant to 90% of PRs, they're still something that should be checked.  I'd argue all of that is a great benefit that we can get essentially for free by having the bots do it for us.

The problem is that on the matplotlib project there's no trust in the codecov setup, and as a result barely anyone pays attention to them. IMO, that's the problem we need to solve. I agree with Tom that this is a configuration problem for matplotlib. This is not a problem, for example, on MetPy where our codecov setup works perfectly fine. Any occasional inconsistency in results or supposed red herrings usually trace back to problems with our config (codecov's questionable UX aside).
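
As one concrete example of the kind of tuning I mean (a sketch only;
the 1% tolerance is illustrative, not MetPy's or matplotlib's actual
setting), the project status can be given a small threshold so that
run-to-run noise doesn't flip it red:

    # codecov.yml -- sketch; tolerate small coverage fluctuations
    # before the project status is marked as failing
    coverage:
      status:
        project:
          default:
            threshold: 1%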

Ryan
