abaonwan.over-blog.com

Testing, Waterfall, Agile, Mindset, Mainframe... - My thoughts

Our path to reach better quality

2010 - 2017, what have we done?

Level 1

In 2010, when I started my "focus on testing" career, I was mostly involved in "Integration & Assembly Testing", what we could call "technical testing": you don't really care about functionalities, you mainly ensure that when you launch something into the chain, it goes all the way to the end without breaking the system.

We were not just validating that kind of thing; we were also going into the databases to check the data.

This test phase lasted one month.

In the "Functional" part of the team, they were running test cases and ensuring that
- fields in the databases were correctly filled in
- fields on the GUI were OK

for some dataflows (between 50 and 100, for a system that in fact had more than 10,000 possible dataflows at the time).

Not all flows, because they did not have enough time. The Functional testing phase lasted "only" one month before handing over to the "User Acceptance Tests" for three months.

Both phases also included some "Non-Regression Testing", which required a ton of effort as well.

As you can imagine, since our test coverage was so low, the quality of the application handed over to the "User Acceptance Tests" was not very good.

Level 2

After 6 months in the team, I decided to get the ISTQB Foundation certification.

While preparing for it, I learned more about my new job, and I started learning about coverage and the different test design techniques.

When my test manager assigned me as "test coordinator" for a new part of the application, I changed the "Integration & Assembly" approach for it.

This application changed less often and was more stable; nevertheless, the global testing approach was the same.

I started making a connectivity map between the main programs and modules, and I indicated all the possible paths on it.

This gave us a clear view of how it worked and what we should test for regression purposes.

We created the test flow package for this, and it worked fine. It took some weeks to get the full package done, but the stability of our delivery to the next testing team was better.
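The connectivity-map idea can be sketched as a small directed graph from which every possible path is enumerated. This is only an illustration: the module names and the chain below are invented, not the real application.

```python
# Hypothetical connectivity map: each main program/module points to the
# modules it can hand data to (all names invented for illustration).
CONNECTIVITY = {
    "INTAKE": ["VALIDATE"],
    "VALIDATE": ["ENRICH", "REJECT"],
    "ENRICH": ["DISPATCH"],
    "DISPATCH": ["ARCHIVE"],
    "REJECT": ["ARCHIVE"],
}

def all_paths(graph, start, end, path=None):
    """Enumerate every possible path from start to end (no cycles)."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # skip nodes already visited on this path
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

for p in all_paths(CONNECTIVITY, "INTAKE", "ARCHIVE"):
    print(" -> ".join(p))
```

Once the paths are listed, picking the ones to keep in the regression package becomes a coverage decision instead of guesswork.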

Level 3

One and a half years later, still doing "integration testing" like a robot on the first application (I was not the original author of the first integrations for the application; I just followed the same recipe afterwards), I decided to speak to my test manager, because our tests were not relevant enough, took too much time, and were not efficient.

She allowed me to start refactoring the "integration & assembly testing" phase.

I first discussed it with the testers, and they all told me the same thing: "we are not doing the right stuff".

They had never complained; they kept doing things the way they had always been done, simply because that was how it was.

So, we put an "innovative" approach in place: changing our validation approach to focus on stability, and removing all validations that were considered more "functional".

They were happy, and I also promised them a "faster non-regression package".

It was my first step into the automation world... I did not do real automation; I did what I call "semi-automation". Instead of validating flows one by one, I gathered all the data at certain key moments of the day, and the validation became "compare the gathered data to the expected".
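A minimal sketch of that "semi-automation" idea, as a gather-then-compare (snapshot) check. The flow IDs, statuses and record shape are invented assumptions; the real data lived in mainframe files and databases.

```python
# Hypothetical "semi-automation": instead of checking each flow one by one,
# collapse a run's raw records into one snapshot and diff it against the
# expected snapshot from a known-good run.
def gather_snapshot(records):
    """Collapse raw records into a comparable {flow_id: status} snapshot."""
    return {rec["flow_id"]: rec["status"] for rec in records}

def compare(actual, expected):
    """Return a human-readable list of differences between two snapshots."""
    diffs = []
    for key in sorted(set(actual) | set(expected)):
        a = actual.get(key, "<missing>")
        e = expected.get(key, "<missing>")
        if a != e:
            diffs.append(f"{key}: expected {e}, got {a}")
    return diffs

expected = {"FLOW-001": "OK", "FLOW-002": "OK", "FLOW-003": "OK"}
actual = gather_snapshot([
    {"flow_id": "FLOW-001", "status": "OK"},
    {"flow_id": "FLOW-002", "status": "KO"},
    {"flow_id": "FLOW-003", "status": "OK"},
])
print(compare(actual, expected))
```

The point of the design is that one diff over the whole snapshot replaces dozens of manual per-flow checks, which is where the drop from 20 man-days to 2 came from.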

From 20 man-days of execution, we dropped to 5 calendar days of execution (and really 2 man-days of work) with the same quality.

Of course, everybody was happy (managers, teammates, me).

And... after all this effort, my customer decided to hand this test phase over to the development team, which refused to use the "semi-automated" stuff because they did not understand it... (back to Level 1 for them).

Level 4

We joined the functional testing team, and what I discovered was sometimes awful.

We were testing on a mainframe: we had to go into the databases and files to check that everything ran fine.

Only one person in the team had technical knowledge; the others were saying "I'm functional, I don't care about technical stuff, and testing mainframe sucks, so I only use the GUI..."

The validations in HP QC were sometimes just a link to the Functional Design. I was shocked.

And I suddenly understood why the quality of the output was so low.

We changed test managers, and the new one came with another approach.

We had to improve our test coverage, and in HP QC our validations had to be understandable by "someone picked up in the street".

To improve our coverage, the only way we found was going deeper into the files and databases and validating each field, one by one.

So it took more time and more people to do the checks, in order to deliver something more stable.

We also took more time to write real validation "how-tos" and "expected results".
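Field-by-field validation of a mainframe file record might look like this sketch. The fixed-width layout, field names and values are all invented assumptions; a real record layout would come from the application's copybooks.

```python
# Hypothetical fixed-width record layout: field name -> (start, end) slice
# positions in the line (all positions and names invented for illustration).
LAYOUT = {"account": (0, 5), "currency": (5, 8), "amount": (8, 16)}

def parse_record(line):
    """Cut a fixed-width line into named fields."""
    return {name: line[start:end].strip() for name, (start, end) in LAYOUT.items()}

def validate(line, expected):
    """Check every field one by one and report each mismatch."""
    record = parse_record(line)
    return [f"{field}: expected '{want}', got '{record[field]}'"
            for field, want in expected.items() if record[field] != want]

line = "12345USD  100.00"
for failure in validate(line, {"account": "12345",
                               "currency": "EUR",
                               "amount": "100.00"}):
    print(failure)
```

Writing the expected value per field is exactly what made the validations understandable by "someone picked up in the street": every check names the field, the expectation, and the actual value.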

It worked better; the "user acceptance testers" were happier with what we gave them.

But it was not enough.

Level 5

"Testing costs too much so..."

the management decided to merge the "functional testing team" and the "user acceptance testing team".

In a sense they were right: we were all doing the same stuff, the same validations (field contents and so on).

There were just more of them than of us (we were 6, they were nearly 20), and they had 3 months to test, so they were able to test more and detect more bugs.

They were not using HP QC; they had plenty of test matrices, mostly incomprehensible to newcomers.

So... when we merged, we moved everything to HP QC, wrote real validations, and continued working the same way.

3 months of preparation for 3 months of execution, so... 2 major deployments per year.

If you remember, at Level 1 I was talking about "10,000 possible dataflows". Surprise... the application continued to grow and onboard new customers, and now it's "40,000 possible dataflows", so yes... we need to test more and more.

Nevertheless... the quality of the delivery in the Customer Environment improved, and everybody was happy except the management, because

- "testing continues to cost too much..."
- "we need to improve our time to market..."
- "if we delay a deployment in Customer Environment, it's the fault of the testing they are not able to respect their planning"  (yes of course, it's not because the quality of the delivery sucks?)

So... "AUTOMATE"

Level 6?

Automation... Fancy word...

But wait...

Are you sure we work the right way?

This will continue in the article "Are we really doing functional testing?"

 

Abaonwan

 