Thoughts on being a Good Test Software Engineer

I’ve lost count of how many projects I’ve worked on over the past 15 years. Most have been good, some have been bad, and then there’s the odd one that makes you truly want to turn your back on any kind of software development and go and apply for that oh-so-stress-free-looking shelf-stacking job at Sainsbury’s!

Most of them have been in a test role in one way or another, which is kind of funny, as for a long time I was desperate to make my way into another sphere – ideally development in Java or C#, both of which I’ve done a reasonable amount of – but I have always been pulled back to test, and in particular aerospace test, which should be considered the context for this blog.

More recently though, I have come to appreciate more and more that I actually love being a Test Software Engineer. It’s not like you can pull out your mobile phone and show off an app or website that you have worked on to your peers or friends, but what you get is maybe the even bigger satisfaction of being able to say that, although you may have been a very small cog, you have helped to put a physical satellite in orbit (the buzz of seeing a rocket launch a satellite you worked on is unbelievable!), or a plane in the air that works perfectly, to which somewhere in the chain you contributed.

To put it simply, being a Test Software Engineer can be incredibly fulfilling, and it is not, as I was once told by a rather aloof colleague, a career choice for failed software engineers.

It can be every bit as challenging, interesting, and rewarding as, say, pure application development in C++, Java or C#, all of which I also have a reasonable amount of experience in. And best of all, if you’re a LabVIEW FanBOY like myself, there really isn’t (again, this is only speaking from my personal experience, so feel free to disagree) a better language to develop aerospace test software in.

This is not just because of the speed advantages of developing in LabVIEW; those advantages are compounded by the massive amount of third-party and mostly free driver code, which means that for most modern instruments from the likes of Keysight, Rohde & Schwarz and Tektronix, the prime driver support is now for LabVIEW. Add TestStand into the mix, and you will struggle to find a more effective test software development tool chain.

Digressing back to the title of this blog, if you choose (or fall!) into being a Test Software Engineer, then here are a few tips for being successful – things that might help you get on, and that I certainly feel have helped me become a half-decent Test Software Engineer:-

Expect and plan for Change

I’ve worked on a couple of projects where there has been a real siege mentality and huge resistance with regard to requirement changes, and it has been a massive point of conflict within the project team.

From a Test Engineer’s point of view, the main conflicts in my experience have come when a customer doesn’t fully appreciate that even a minor requirement change can entail an awful lot of work if you have a full audit trail of specifications, validation procedures, regression testing and release actions going on, as is often the case in aerospace.

Certainly, trying to explain to a customer why changing, say, a single limit causes hours of work when it only takes seconds to actually change the value in the code can be a difficult sell, but in the incredibly tightly configuration-controlled aerospace world, this is often the reality.

It is a situation which in many cases you will not be able to get away from. In any sort of concurrent development environment where you are developing test software alongside the product, requirement changes are going to happen, and being overly stubborn about this is never going to be very productive or put you in a good light.

You can, though, help yourself immensely by planning for, expecting, and even predicting where changes will occur, so that when the inevitable requirement changes come through you are in the best possible position to react, outside of any organisation-level change machinations.

In practice, from a LabVIEW perspective, this means keeping your software nicely modular and loosely coupled, as is a general rule of software development, and externalising variables – the ideal situation being that when a change comes along, it may be a case of simply updating a spreadsheet or CSV file, with no recompilation of the code required.
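To illustrate the idea of externalised limits (LabVIEW itself is graphical, so this is a minimal Python sketch; the measurement names and values are invented for illustration), a limit change then only means editing a file shipped alongside the built application, with no rebuild of the test software:

```python
import csv
import io

# Hypothetical limits file: one row per measurement with lower/upper bounds.
# In a real deployment this would be a CSV file next to the executable.
LIMITS_CSV = """\
measurement,lower,upper,units
supply_current,0.10,0.25,A
tx_power,9.5,10.5,dBm
"""

def load_limits(text):
    """Parse the limits file into {measurement: (lower, upper, units)}."""
    limits = {}
    for row in csv.DictReader(io.StringIO(text)):
        limits[row["measurement"]] = (
            float(row["lower"]), float(row["upper"]), row["units"]
        )
    return limits

def check(measurement, value, limits):
    """Pass/fail a reading against the externally defined limits."""
    lower, upper, _units = limits[measurement]
    return lower <= value <= upper

limits = load_limits(LIMITS_CSV)
print(check("supply_current", 0.18, limits))  # True: within 0.10-0.25 A
print(check("tx_power", 11.0, limits))        # False: above 10.5 dBm
```

The same pattern works in LabVIEW by reading the file at start-up into a lookup, so a customer limit change never touches the block diagram.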

Be prepared to dig your heels in over requirement changes at times

Yes – I have completely contradicted myself! But there is a subtle difference between what I said in the last section and this section.

Whilst it is always good to plan for change and be flexible, being a complete pushover to a customer with regard to requirement changes can also make your life very difficult, especially if the requirements are drip-fed. I’ve been in this situation before, and you can end up chasing your own tail: before you’ve got one change closed off, another one comes along, and your version numbering starts resembling the speedo on the space shuttle at launch!

It is also a hugely inefficient way of working which could ultimately reflect badly on you, so it is in your interests to manage it.

In the first instance, this should involve agreeing a clear set of requirements prior to commencing any work.

This may seem a blatantly obvious statement, but it is amazing how often I’ve seen it occur that, in the ecstasy of winning a contract and the desire and push to get cracking on with the work, a very laissez-faire attitude is taken to accurately specifying what it is you’re contracted to do.

If you and your customer are completely synchronized in terms of what you are expected to deliver, then there is unlikely to be a problem, but sadly, from what I’ve seen, this is rarely the case in these situations, and it stores up conflict down the line.

It is also good to agree up front what the process for a change request is going to be – usually this will simply be a case of your customer requesting a change and you replying with an estimate, but it’s a small thing that can again reduce the chances of requirements conflict occurring down the line.

The final thing that you can do to significantly reduce conflicts over requirements is to keep the customer fully engaged in your development process, with regular reviews prior to your test software being fully issued – often, if a change is simply a minor limit tweak, then done at the development stage the impact may only be a few seconds or minutes.

Going back to the drip-feeding situation, encouraging any changes to be batched can also improve efficiency significantly and spread any housekeeping-type overhead tasks across a number of changes.

Never lose sight of your end users

It is unlikely that the person you are interfacing with on behalf of the customer will actually be the person using the software on a day-to-day basis. Given that by the time a piece of test software actually gets into production, the impact of any mistakes or ambiguities in terms of operation can be magnitudes greater than during development, it is very much worth getting test operators involved as early as possible to ensure that the software is usable and understandable to them – they may have a very different level of technical ability, or a very different perspective, to yourself.

Often they will spot usability issues that you as the developer may be oblivious to, and this will save significant time in training when the software goes into production.

Another massive reason for getting operators involved early is that their opinion of your test software will often carry genuine weight, and certainly many of the test operators I’ve worked with don’t need asking twice to voice their opinions if they’re not satisfied with what they’ve been given – so it’s good to make the effort to keep them happy!

Be especially careful here of delivering overly complicated user interfaces – hide anything that is irrelevant or could be confusing to a user. Conversely, any data you can display on a GUI to reassure an operator that the test software is running correctly and give them a warm fuzzy feeling can be very useful – even just a table of results that an operator can cast a cursory gaze over. If things go wrong, it can also help them debug a setup, potentially without having to get you involved.

Drop the bells and whistles

It has become very obvious through all my years working in test that in many cases, if a company could do without commissioning an automated test system, then they probably would.

In the aerospace world especially, when a company commissions a product, they will insist on an automated test solution being provided as part of the contract, even though in many cases – and this is certainly true of low-volume items – it works out more expensive than employing someone to do the testing manually. With high-volume items, the argument against automation rapidly diminishes.

The big reason for this is that it takes the human element (making mistakes) out of the equation and gives a customer traceability that all their units have been tested to precisely the same standard, which is difficult to guarantee with someone doing it manually.

Because of this, the cost pressures on test are usually significantly greater than in other areas of software development, and the test budget is almost always the first to be slashed when cost overruns occur.

Therefore, you are unlikely to get any thanks for any bells and whistles, and they should be avoided – your mantra as a Test Software Developer should be to do the absolute bare minimum to comprehensively test a device, and no more.

There is always scope throughout a test project to over-deliver with minimal additional effort. For example, if you’re developing a driver of some sort, there is no harm in creating a usable, deliverable GUI for it: it is likely to be of little or no overhead, will probably benefit your development of the driver, and may benefit your end user.

Expect to see your timescales slashed

Working in test, in almost all cases you will be the last in line to get your hands on a fully representative DUT (Device Under Test).

Being at the end of the line, it is also likely that any slippage (which is awfully common) in the design or pre-production stage will heap more and more pressure on you to integrate and validate your test software in super-quick time – as, guess what, your delivery dates haven’t changed!

In the first instance, you can mitigate this by ensuring that by the time you get a DUT to integrate and validate your software with, you have done absolutely everything in your power to cover every aspect of the testing that you can get by without a DUT, so there are no nasty surprises in the software.

Building in even a very primitive simulation capability with the ability to switch components of the software on and off from the start can significantly aid with this.
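As a hedged sketch of that simulation switch (in Python rather than LabVIEW, with an invented instrument wrapper and invented VISA-style address), the idea is simply a flag that routes every hardware call to a plausible stand-in value so the full sequence can be exercised before a DUT exists:

```python
import random

class PowerMeter:
    """Hypothetical instrument wrapper with a built-in simulation mode."""

    def __init__(self, address, simulate=False):
        self.address = address
        self.simulate = simulate
        if not self.simulate:
            # A real session would be opened here (e.g. via VISA);
            # omitted because this sketch has no hardware behind it.
            raise NotImplementedError("hardware session not available in this sketch")

    def read_power_dbm(self):
        if self.simulate:
            # Return a plausible value so downstream limit checks still run.
            return 10.0 + random.uniform(-0.2, 0.2)
        ...  # real instrument query would go here

# With simulate=True the rest of the test sequence runs end to end.
meter = PowerMeter("GPIB0::13::INSTR", simulate=True)
reading = meter.read_power_dbm()
print(f"simulated reading: {reading:.2f} dBm")
```

The LabVIEW equivalent is a case structure on a "simulate" flag inside each instrument module, which is exactly the on/off component switching described above.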

It can also be of great help here if you meticulously plan your validation activities, so that when a DUT appears you have a clear idea of what it will take for you to be happy to release the software to production.

Be careful if pressure is put on you to reduce your validation program or push software into production too early

As I’ve already mentioned, being pragmatic and flexible as a Test Software Engineer is never a bad thing, but one area where you should be ultra careful about giving ground is when it comes to integrating and validating your code prior to production.

This is because as soon as that code goes into production use, any changes or bugs are likely to have a far-reaching impact. Not only will a fix require your time, but you could well be shutting down an entire production line of many people whilst you get things sorted. Add to that the significant paper trail needed to get a production-approved piece of software up-issued, and the fact that often, when this happens, all previously tested DUTs will need to be re-tested to ensure every unit has been tested to the same standard.

It can be a very stressful situation when you have people jumping up and down for you to deliver test software to production, but in my experience that kind of heat is nothing compared to the furore you can expect if you release a piece of software into production prematurely which hasn’t been thoroughly validated and as a result has to be significantly up-issued in production.

Consider presentation of your software to non-software engineers

It is very easy to take your own and your software colleagues’ understanding of how a test software package is put together for granted, forgetting that the person ultimately signing off your software may often not be a software engineer themselves. It is always good to keep this in mind and ensure that your code is understandable to these people.

As I’ve mentioned in a previous blog, this is one area where TestStand really excels over LabVIEW, as most people can easily understand a TestStand sequence without ever having coded one themselves, and this isn’t always the case with LabVIEW code.

Where you have been given clear step-by-step measurement requirements, it is worth using the various labeling functionality in LabVIEW and TestStand to clearly link specific code nodes or steps to the paragraphs or requirement IDs in the requirements document.
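The same traceability idea can be sketched in a few lines (Python stand-in; the requirement IDs, step names and limits are all invented for illustration): carry the requirement ID alongside each step so a non-software reviewer can walk the sequence against the spec.

```python
# Each test step records which requirement it verifies, mirroring the
# step names / labels you would use in a TestStand sequence.
STEPS = [
    {"req_id": "REQ-042", "name": "Measure supply current", "limit": "0.10-0.25 A"},
    {"req_id": "REQ-043", "name": "Measure TX power", "limit": "9.5-10.5 dBm"},
]

def traceability_report(steps):
    """One line per step, leading with the requirement it satisfies."""
    return [f'{s["req_id"]}: {s["name"]} ({s["limit"]})' for s in steps]

for line in traceability_report(STEPS):
    print(line)
```

A report like this, generated straight from the sequence, gives a reviewer a requirement-by-requirement checklist without them having to read any code.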

Consider test time when planning your validation activities

It is very easy to draw up a tick list of activities to be carried out to validate a piece of test software without fully considering the actual time it is going to take to run a test.

If an end to end test takes say 2 hours, you are probably going to want to see that run faultlessly at least 3 times to consider it to be validated.

If halfway through a run you spot a problem then you have straight away added at a minimum another 2 hours to your validation activities without even taking into account fixing the problem.

If the problem is spotted on the third run, then you are probably going to want to re-run the software 3 times with any mods you have added in – and this soon stacks up.

To mitigate this, avoid lengthy test runs wherever possible and break a run up into smaller chunks.
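The arithmetic above can be made concrete with a rough model (a simplification: it assumes a failure wastes every run up to and including the one it was spotted on, then the clean-run count restarts):

```python
def total_validation_hours(run_hours, clean_runs_needed, failure_on_run=None):
    """Estimate validation time for a test that must pass a number of
    consecutive clean runs. A failure on run `failure_on_run` wastes all
    runs so far and restarts the clean-run count (fix time not included)."""
    if failure_on_run is None:
        return run_hours * clean_runs_needed
    return run_hours * (failure_on_run + clean_runs_needed)

# The 2-hour, 3-clean-runs example from the text:
print(total_validation_hours(2, 3))                    # 6 hours, no failures
print(total_validation_hours(2, 3, failure_on_run=3))  # 12 hours
```

Even one failure on the last run doubles the schedule, which is exactly why shorter, chunked runs limit the damage.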

Consider robustness above all else

Something that is sure to give you a bad name as a test software developer is flaky software (probably true in all development contexts).

In an ideal scenario, a DUT will be put on a test system and tested once, never to be seen by it again.

The nightmare scenario (and I have been there myself) is when a DUT goes on test, and the test software regularly breaks down requiring retest after retest until you get a clear run.

In some instances this may be something out of your control. I remember a number of years ago a colleague of mine had used a spurious measurement personality on an instrument that, every so often, would inexplicably indicate to the test software that it had finished its measurement when a look at the screen of the instrument indicated that it definitely hadn’t, requiring a complete restart of the instrument and the test.

It’s no exaggeration to say that we lost days if not weeks of test time due to this issue. There was very little we could do about this apart from call in the manufacturer to try and sort it out.

Having said all of this, the situation wasn’t helped by the fact that, following on from the previous section, this particular test had a run time of about 16 hours due to the span and resolution requirements – so you can imagine how happy everyone was when it repeatedly kept falling over!

Obviously you can’t do an awful lot about things like this – picking up the problem before you get to the validation stage is a good start – but where you can increase the robustness of a module of software, it is very much worth doing.

One particular strategy for this is to place a block of measurement code within a for loop, with the conditional terminal wired to the error out of the last VI in the sequence and set to continue on error, and then set N to however many retries you want. It takes seconds to do, but can massively mitigate the occasional blips you might get with instruments, which are often the most common source of errors.

Retry Loop
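For readers who don’t use LabVIEW, here is a rough textual equivalent of that retry pattern (a Python sketch; `flaky_measurement` is an invented stand-in for an instrument call that occasionally errors): attempt the measurement up to N times and only propagate the error if every attempt fails.

```python
def with_retries(measure, n_retries=3):
    """Run `measure` up to n_retries times, returning the first success.
    This mirrors a LabVIEW for loop that continues on error."""
    last_error = None
    for _attempt in range(n_retries):
        try:
            return measure()
        except IOError as err:
            # Equivalent of "continue on error": swallow the blip and retry.
            last_error = err
    raise last_error  # all attempts failed; surface the final error

attempts = {"count": 0}

def flaky_measurement():
    """Invented flaky instrument call: fails twice, then succeeds."""
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise IOError("instrument timeout")  # simulated occasional blip
    return 10.1

print(with_retries(flaky_measurement))  # 10.1, succeeding on the third attempt
```

Note this only papers over transient blips; a measurement that fails every time will still (correctly) raise after the final attempt.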

Keep an issues log

A final recommendation is to keep a log of any issues you encounter throughout your development.

It sounds like a trivial, maybe petty, thing to do, but if for whatever reason your delivery slips – as can happen, often for reasons outside of your control – then having an accurate list of any genuine problems you have encountered along the way is always going to look a lot better than going into a project review unable to give any concrete reasons as to why you have slipped. Not to mention the fact that many issues you will have forgotten within a week of them occurring!

Obviously it’s one of those things that you hope you will never have to use, but is worth doing just in case.
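The log doesn’t need to be anything fancy. As a minimal sketch (the fields and entries below are invented; a spreadsheet works just as well), a dated list with a time-lost figure is enough to walk into a project review with:

```python
# Append-only issues log: one entry per problem, with the time it cost.
def append_issue(log, date, issue, time_lost_hours, root_cause):
    log.append({"date": date, "issue": issue,
                "time_lost_hours": time_lost_hours, "root_cause": root_cause})

def total_time_lost(log):
    """The single number most useful at a project review."""
    return sum(entry["time_lost_hours"] for entry in log)

log = []
append_issue(log, "2016-03-01", "Instrument reported measurement complete early",
             16, "Faulty measurement personality")
append_issue(log, "2016-03-04", "Representative DUT delivery slipped",
             40, "Pre-production slippage")
print(total_time_lost(log))  # 56
```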

I shall update this blog if anything else comes to mind (kind of feels like a bit of a work in progress), but I feel a quick session on Elite Dangerous coming on before bed and various activities with the kiddies tomorrow.