Can We Trust Tests To Automate Dependency Updates? A Case Study of Java Projects

09/24/2021
by Joseph Hejderup, et al.

Developers are increasingly using services such as Dependabot to automate dependency updates. However, recent research has shown that developers perceive such services as unreliable, as they heavily rely on test coverage to detect conflicts in updates. To understand the prevalence of tests exercising dependencies, we calculate the test coverage of direct and indirect uses of dependencies in 521 well-tested Java projects. We find that tests only cover 58% of direct and 20% of transitive dependency calls. By creating 1,122,420 artificial updates with simple faults covering all dependency usages in 262 projects, we measure the effectiveness of test suites in detecting semantic faults in dependencies; we find that tests can only detect 47% of direct and 35% of indirect artificial faults. To increase reliability, we investigate the use of change impact analysis as a means of reducing false negatives; on average, our tool can uncover 74% of the injected faults in direct dependencies and 64% in transitive dependencies that go undetected by test suites. We then apply our tool in 22 real-world dependency updates, where it identifies three semantically conflicting cases and five cases of unused dependencies. Our findings indicate that the combination of static and dynamic analysis should be a requirement for future dependency updating systems.
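To make the kind of semantic fault the artificial updates simulate concrete, consider a dependency release that changes behavior without touching any method signature. The sketch below is hypothetical (PriceUtils, InvoiceTest, and the rounding change are invented for illustration, not taken from the study): a project test that never exercises the affected input stays green across the faulty update, exactly the false negative the abstract describes.

```java
// Hypothetical dependency class. In v1.0 it rounded half-up; the v1.1
// release switches to banker's rounding, a semantic change that compiles
// cleanly and breaks no signatures.
import java.math.BigDecimal;
import java.math.RoundingMode;

class PriceUtils {                       // stand-in for a library class
    static BigDecimal round(BigDecimal amount) {
        // v1.0: RoundingMode.HALF_UP   -> 2.5 becomes 3
        // v1.1: RoundingMode.HALF_EVEN -> 2.5 becomes 2
        return amount.setScale(0, RoundingMode.HALF_EVEN);
    }
}

public class InvoiceTest {
    public static void main(String[] args) {
        // Project test: only exercises a value where both rounding modes
        // agree, so the suite stays green after the faulty update.
        assert PriceUtils.round(new BigDecimal("2.4")).intValue() == 2;
        // The uncovered usage silently changes result from 3 to 2 in v1.1.
        System.out.println(PriceUtils.round(new BigDecimal("2.5")));
    }
}
```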
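The change impact analysis the abstract refers to can be approximated as a reachability question over a call graph: a test is impacted by an update if some method changed in the new release is transitively reachable from it. The following minimal sketch assumes a precomputed static call graph (e.g., one extracted with a framework such as WALA or Soot); the graph and all method names are hand-written for illustration and this is not the authors' tool.

```java
import java.util.*;

/** Minimal sketch of test selection via change impact analysis over a
 *  hand-written call graph (caller -> callees), spanning both project
 *  and dependency methods. */
public class ImpactAnalysis {
    public static void main(String[] args) {
        Map<String, List<String>> callGraph = Map.of(
            "InvoiceTest.testRounding", List.of("Invoice.total"),
            "Invoice.total",            List.of("PriceUtils.round"),
            "ReportTest.testHeader",    List.of("Report.header"));

        // Methods changed by the dependency update (e.g., from a bytecode diff).
        Set<String> changed = Set.of("PriceUtils.round");

        // A test is impacted if any changed method is transitively reachable from it.
        for (String test : List.of("InvoiceTest.testRounding", "ReportTest.testHeader")) {
            System.out.println(test + " impacted: " + reaches(test, changed, callGraph));
        }
    }

    // Breadth-first search from a test method toward the changed methods.
    static boolean reaches(String from, Set<String> targets, Map<String, List<String>> graph) {
        Deque<String> work = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>();
        while (!work.isEmpty()) {
            String m = work.pop();
            if (targets.contains(m)) return true;
            if (seen.add(m)) work.addAll(graph.getOrDefault(m, List.of()));
        }
        return false;
    }
}
```

Read this way, the analysis reduces false negatives from two directions: impacted tests can be prioritized for the update, and changed methods reachable from no test pinpoint update code the suite cannot vouch for.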
