Thoughts on being a Good Test Software Engineer

I’ve lost count of how many projects I’ve worked on over the past 15 years – most have been good, some have been bad, then there’s the odd one that makes you truly want to turn your back on any kind of software development and go and apply for that oh so stress free looking shelf stacking job at Sainsbury’s!

Most of them have been in a test role in one way or another, which is kind of funny, as for a long time I was desperate to make my way into another sphere – ideally development in Java or C#, both of which I’ve done a reasonable amount of – but I have always been pulled back to test, and in particular aerospace test, which should be considered the context for this blog.

More recently though, I have come to appreciate more and more that I actually love being a Test Software Engineer. It’s not like you can pull out your mobile phone and show off an app or website that you have worked on to your peers or friends; what you get is maybe the even bigger satisfaction of being able to say that, although you may have been a very small cog, you have helped to put a physical satellite in orbit (the buzz of seeing a rocket launch a satellite you worked on is unbelievable!), or a plane in the air that works perfectly and that somewhere in the chain you contributed to.

To put it simply, being a Test Software Engineer can be incredibly fulfilling, and is not, as I was once told by a rather aloof colleague, a career choice for failed software engineers.

It can be every bit as challenging, interesting, and rewarding as doing, say, pure application development in C++, Java or C#, all of which I also have a reasonable amount of experience in. And best of all, if you’re a LabVIEW fanboy like myself, there really isn’t (again, speaking only from my personal experience, so feel free to disagree) a better language to develop aerospace test software in.

This is not just because of the speed advantages of developing in LabVIEW; those advantages are compounded by the massive amount of third-party (and mostly free) driver code, which means that for most modern instruments from the likes of Keysight, Rohde & Schwarz and Tektronix, the prime driver support is now for LabVIEW. Add TestStand into the mix and you will struggle to find a more effective test software development tool chain.

Digressing back to the title of this blog, if you choose (or fall!) into being a Test Software Engineer, here are a few tips for being successful – things that might help you get on, and that I certainly feel have helped me become a half-decent Test Software Engineer:-

Expect and plan for Change

I’ve worked on a couple of projects where there has been a real siege mentality and huge resistance with regard to requirement changes, and it has been a massive point of conflict within the project team.

From a Test Engineer’s point of view, the main conflicts in my experience have come when a customer doesn’t fully appreciate that even a minor requirement change can entail an awful lot of work if you have a full audit trail of specifications, validation procedures etc., regression testing and release actions going on, as is often the case in aerospace.

Certainly trying to explain to a customer why changing, say, a single limit causes hours of work when it only takes seconds to actually change the value in the code can be a difficult sell, but in the incredibly tightly configuration-controlled aerospace world, this is often the reality.

It is a situation which in many cases you will not be able to get away from. In any sort of concurrent development environment where you are developing test software alongside the product, requirement changes are going to happen, and being overly stubborn about this is never going to be very productive or put you in a good light.

You can, though, help yourself immensely by planning for, expecting, and even predicting where changes will occur, so that when the inevitable requirement changes come through you are in the best possible position to react, outside of any organisation-level change machinations.

In practice, from a LabVIEW perspective, this means keeping your software nicely modular and loosely coupled (as is a general rule of software development) and externalising variables, the ideal situation being that if a change comes along it is simply a case of updating a spreadsheet or CSV file, with no recompilation of the code required.
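The same principle holds in any language: keep the limits in a data file that the software reads at run time. As a very rough sketch in Python rather than G (with a hypothetical limits.csv layout of my own invention), a limit change then becomes a one-line file edit rather than a code change and re-release:

```python
import csv

def load_limits(path="limits.csv"):
    # Hypothetical file layout: one row per measurement with columns
    # test_name, low, high, units - you edit the file, not the code.
    limits = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            limits[row["test_name"]] = (float(row["low"]), float(row["high"]), row["units"])
    return limits

def check_limit(limits, test_name, value):
    # Compare a measured value against the externally defined limits.
    low, high, units = limits[test_name]
    passed = low <= value <= high
    print(f"{test_name}: {value} {units} -> {'PASS' if passed else 'FAIL'} (limits {low} to {high})")
    return passed
```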

Be prepared to dig your heels in over requirement changes at times

Yes – I have completely contradicted myself! But there is a subtle difference between what I said in the last section and this section.

Whilst it is always good to plan for change and be flexible, being a complete pushover with a customer when it comes to requirement changes can also make your life very difficult, especially if the requirements are drip fed. I’ve been in this situation before, and you can end up chasing your own tail: before you’ve got one change closed off another one comes along, and your version numbering starts resembling the speedo on the space shuttle at launch!

It is also a hugely inefficient way of working which could ultimately reflect badly on you, so it is in your interests to manage it.

In the first instance this should involve agreeing a clear set of requirements prior to commencing any work.

This may seem a blatantly obvious statement, but it is amazing how often I’ve seen it happen that, in the ecstasy of winning a contract and the desire and push to get cracking on with the work, a very laissez-faire attitude is taken to accurately specifying what it is you’re contracted to do.

If you and your customer are completely synchronised in terms of what you are expected to deliver then there is unlikely to be a problem, but sadly, from what I’ve seen, in these situations this is rarely the case, which stores up conflict down the line.

It is also good to agree up front what the process for a change request is going to be – usually this will simply be a case of your customer requesting a change and you replying with an estimate, but it’s a small thing that can again reduce the chances of a requirements conflict occurring down the line.

The final thing that you can do to significantly reduce conflicts over requirements is to keep the customer fully engaged in your development process, with regular reviews prior to your test software being fully issued; often, if a change is just a minor limit tweak and it is caught at the development stage, the impact may only be a few seconds or minutes.

Going back to the drip feeding situation, encouraging any changes to be batched can also improve efficiency significantly, and spread any housekeeping overhead type tasks across a number of changes.

Never lose sight of your end users

It is unlikely that the person you are interfacing with on behalf of the customer will actually be the person using the software on a day-to-day basis. Given that, by the time a piece of test software gets into production, any mistakes or ambiguities in its operation can be magnitudes greater than during development, it is very much worth getting test operators involved as early as possible to ensure that the software is usable and understandable to them – they may have a very different level of technical ability or a different perspective to yourself.

Often they will spot usability issues that you as the developer may be oblivious to, and getting their feedback early will save significant time in training when the software goes into production.

Another massive reason for getting operators involved early is that their opinion of your test software will often carry genuine weight, and certainly many of the test operators I’ve worked with don’t need asking twice to voice their opinions if they’re not satisfied with what they’ve been given – so it’s good to make the effort to keep them happy!

Be especially careful here of delivering overly complicated user interfaces – hide anything that is irrelevant or could be confusing to a user. Conversely, any data you can display on a GUI to reassure an operator that the test software is running correctly and give them a warm fuzzy feeling can be very useful – even just a table of results they can cast a cursory gaze over – and if things go wrong it can also help them debug a setup, potentially without having to get you involved.

Drop the bells and whistles

It has become very obvious through all my years working in test that in many cases, if a company could do without commissioning an automated test system, then they probably would.

In the aerospace world especially, when a company commissions a product they will insist on an automated test solution being provided as part of the contract, even though in many cases – and this is certainly true of low-volume items – it works out more expensive than employing someone to do the testing manually. With high-volume items the argument against automation rapidly diminishes.

The big reason for this is that it takes the human element (making mistakes) out of the equation and gives a customer traceability that all their units have been tested to precisely the same standard, which is difficult to guarantee with someone doing it manually.

Because of this, the cost pressures on test are usually significantly greater than other areas of software development and the test budget is almost always the first to be slashed when cost overruns occur.

Therefore, you are unlikely to get any thanks for any bells and whistles and they should be avoided – your mantra as a Test Software Developer should be to do the absolute bare minimum to comprehensively test a device and no more.

There is always scope throughout a test project to over-deliver with minimal additional effort – for example, if you’re developing a driver of some sort there is no harm creating a usable, deliverable GUI for it, as it adds little or no overhead, will benefit your development of the driver, and may well benefit your end user.

Expect to see your timescales slashed

Working in test, in almost all cases you will be the last in line to get your hands on a fully representative DUT (Device Under Test).

Being at the end of the line, it is also likely that any slippage (which is awfully common) in the design or pre-production stage will heap more and more pressure on you to integrate and validate your test software in super-quick time – because, guess what, your delivery dates haven’t changed!

In the first instance, you can mitigate this by ensuring that, by the time you get a DUT to integrate and validate your software against, you have done absolutely everything in your power to cover every aspect of the testing that can be covered without a DUT, so there are no nasty surprises left in the software.

Building in even a very primitive simulation capability with the ability to switch components of the software on and off from the start can significantly aid with this.
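Outside LabVIEW the same idea is just a thin abstraction with a simulated implementation behind a flag. Here is a minimal Python sketch with invented class names, purely to show the shape – each instrument gets a simulated stand-in, so the sequencing, limit checking and reporting can all be exercised long before a DUT exists:

```python
class Dmm:
    def measure_voltage(self) -> float:
        raise NotImplementedError

class RealDmm(Dmm):
    def measure_voltage(self) -> float:
        # Real instrument I/O (e.g. a VISA/SCPI query) would go here.
        raise RuntimeError("no hardware attached in this sketch")

class SimulatedDmm(Dmm):
    def measure_voltage(self) -> float:
        return 3.30  # canned but plausible value, enough to exercise the rest of the software

def make_dmm(simulate: bool) -> Dmm:
    # The one switch that flips a component between real and simulated mode.
    return SimulatedDmm() if simulate else RealDmm()

print(make_dmm(simulate=True).measure_voltage())
```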

It can also be of great help here if you meticulously plan your validation activities, so that when a DUT appears you have a clear idea of what it will take for you to be happy to release the software to production.

Be careful if pressure is put on you to reduce your validation program or push software into production too early

As I’ve already mentioned, being pragmatic and flexible as a Test Software Engineer is never a bad thing, but one area where you should be ultra careful about giving ground is when it comes to integrating and validating your code prior to production.

This is because, as soon as that code goes into production use, any changes or bugs are likely to have a far-reaching impact. Not only will a fix require your time, but you could well be shutting down an entire production line of many people whilst you get things sorted – not to mention the significant paper trail involved in getting a production-approved piece of software up-issued, and the fact that, often, all previously tested DUTs will need to be re-tested to ensure everything has been tested to the same standard.

It can be a very stressful situation to be in when you have people jumping up and down for you to deliver test software to production, but in my experience that kind of heat is nothing compared to the furore you can expect if you release a piece of software into production prematurely which hasn’t been thoroughly validated and as a result has to be significantly up-issued in production.

Consider presentation of your software to non-software engineers

It is very easy to take your own and your software colleagues’ understanding of how a test software package is put together for granted, forgetting that the person ultimately signing off your software may not be a software engineer themselves, so it is always good to keep this in mind and ensure that your code is understandable to these people.

As I’ve mentioned in a previous blog, this is one area where TestStand really excels over LabVIEW, as most people can easily understand a TestStand sequence without ever having coded one themselves, and this isn’t always the case with LabVIEW code.

Where you have been given clear, step-by-step measurement requirements, it is worth using the various labelling functionality in LabVIEW and TestStand to clearly link specific code nodes or steps to the paragraphs or requirement IDs in the requirements document.

Consider test time when planning your validation activities

It is very easy to draw up a tick list of activities to be carried out to validate a piece of test software without fully considering the actual time it is going to take to run a test.

If an end to end test takes say 2 hours, you are probably going to want to see that run faultlessly at least 3 times to consider it to be validated.

If halfway through a run you spot a problem then you have straight away added at a minimum another 2 hours to your validation activities without even taking into account fixing the problem.

If the problem is spotted on the third run, then you are probably going to want to re-run the software 3 times with any mods you have added in – and this soon stacks up.
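Putting rough numbers on the example above: three clean two-hour runs is six hours of validation at best. Spot a fault halfway through the third run and you have already spent around five hours; add the time to fix it, then three more clean runs (another six hours) to have confidence in the modified software, and a single late problem has roughly doubled your validation time.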

To mitigate this, avoid lengthy test runs wherever possible and break a run up into smaller chunks.

Consider robustness above all else

Something that is sure to give you a bad name as a test software developer is flaky software (probably true in all development contexts).

In an ideal scenario, a DUT will be put on a test system and tested once, never to be seen by it again.

The nightmare scenario (and I have been there myself) is when a DUT goes on test, and the test software regularly breaks down requiring retest after retest until you get a clear run.

In some instances this may be something out of your control. I remember a number of years ago a colleague of mine had used a spurious measurement personality on an instrument that, every so often, would inexplicably indicate to the test software that it had finished its measurement when a glance at the instrument’s screen showed that it definitely hadn’t, requiring a complete restart of the instrument and the test.

It’s no exaggeration to say that we lost days if not weeks of test time due to this issue. There was very little we could do about this apart from call in the manufacturer to try and sort it out.

Having said all of this, the situation was not helped by the fact that – tying in with the previous section – this particular test had a run time of about 16 hours due to the span and resolution requirements, so you can imagine how happy everyone was when it repeatedly kept falling over!

Obviously things like this you can’t do an awful lot about – picking up the problem before you get to the validation stage is a good start, but where you can increase the robustness of a module of software it is very much worth doing.

One particular strategy for this is to place a block of measurement code within a for loop, with the termination terminal wired to the error out of the last VI in the sequence (set to continue on error), and then set N to however many retries you want. It takes seconds to do, but can massively mitigate against the occasional blips you might get with instruments, which are often the most common source of errors.

Retry Loop
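For anyone not working in LabVIEW, the equivalent retry idea in a text language is just as cheap to add. A sketch in Python, with a hypothetical measure() callable standing in for the instrument read:

```python
def with_retries(measure, retries=3):
    # Retry an instrument read a few times before giving up, so an
    # occasional comms blip doesn't fail an otherwise healthy test run.
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return measure()
        except Exception as err:        # in real code, catch the driver's specific error type
            last_error = err
            print(f"attempt {attempt} failed: {err}")
    raise last_error
```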

Keep an issues log

A final recommendation is to keep a log of any issues you encounter throughout your development.

It sounds like a trivial, maybe petty, thing to do, but if for whatever reason your delivery slips – as can happen, often for reasons outside of your control – then having an accurate list of any genuine problems you have encountered along the way is always going to look a lot better than going into a project review unable to give any concrete reasons as to why you have slipped. Not to mention the fact that, within a week of them occurring, you will have forgotten many of the issues!

Obviously it’s one of those things that you hope you will never have to use, but is worth doing just in case.

I shall update this blog if anything else comes to mind (kind of feels like a bit of a work in progress), but I feel a quick session on Elite Dangerous coming on before bed and various activities with the kiddies tomorrow.

Dave


Adding TCP/IP Functionality To Actor Using Abstract Message Classes

As anyone who has read my blog previously will know, I am a big fan of the Actor Framework. It is a fantastic way of creating highly robust and scalable asynchronous-process applications.

Having said all that, my biggest reservation about Actor has always been that, because of the way the messaging between Actors works, out of the box Actors are strongly coupled. In a lot of instances – maybe most – this is not an issue, as you may be putting together an application where the different Actors benefit from being strongly coupled, or there is no benefit to decoupling them; but I’ve also come across a few use cases where I want to be able to completely decouple individual Actors for testing, and to allow multiple people to work on the same application.

One solution to this is abstract message classes, and one of the places where I’ve found this comes in really useful is adding a TCP/IP socket onto an existing Actor.

Initially this came from working on a project where I’d created a TCP/IP Actor without abstract messages, but then another project came along that also required TCP/IP control, so I refactored my TCP/IP Actor to use abstract messages so that it could be re-used.

How this worked in practice is that, in the original coupled TCP/IP Actor, I enqueued two messages on the caller: one to report the TCP/IP Actor’s status in terms of clients connected, and one to communicate received messages to the caller. The caller was then able to ‘Do’ these messages, and enqueue a message on the TCP/IP Actor to communicate back to a client.

Converting a non-abstract message to an abstract one is fairly straightforward using the following process:-

  • Create your message as usual using the Actor Framework Message Maker
  • Remove the ‘Do’ and ‘Send’ methods from the Message Class.
  • In the class properties, under the ‘Inheritance’ category, check ‘Transfer all Must Override requirements to descendant classes’.

Transfer Override

  • To understand why you need to do this, open Message.lvclass from the ActorFramework directory under vi.lib. Under Item Settings in the class properties, if you select ‘Do.vi’ you will notice that the checkbox ‘Require descendant classes to override this dynamic dispatch VI’ is checked. This forces any class that inherits from Message.lvclass to implement this method; by using ‘Transfer all Must Override requirements to descendant classes’ you remove this requirement from the class you are inheriting into, but push it down to any of its own child classes, which allows the class to compile.

message class

  • Next you need to expose any properties of the abstract message; as we no longer have a Send method to encapsulate these (although there’s no reason why you couldn’t create a new one and pass in the class type), we will have to set them via property nodes and use Enqueue.vi directly.

Abstract Messages

  • Because the Do method has been removed, if the TCP/IP Actor wants to send the message to its caller there won’t be anything to execute, so you now have to create a message class that inherits from the abstract class and provides a concrete Do method. This will be specific to the particular caller Actor you want to use it with.

Overridden Abstract Messages

  • Now that you’ve inherited these concrete classes from the abstract classes, you need to tell the TCP/IP Actor to use them at runtime; I personally tend to do this using a constructor, passing in the concrete classes to be held in private data.

Constructor - defining Concrete classes

  • This means that when you want to send what was originally a specific caller message, you have to pull the class the caller specified out of private data, populate its properties and enqueue it on the caller’s message queue.

Using concrete class

  • You obviously also need to go back to your new Concrete classes and populate their Do methods for anything to happen.
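For non-LabVIEW readers, the overall shape of the pattern translates roughly to the Python sketch below (all names here are mine, not those in the demo): the reusable TCP/IP actor only ever sees the abstract message, the concrete Do behaviour lives with the caller, and the concrete class is injected through the constructor and held in private data.

```python
import copy
from abc import ABC, abstractmethod

class ClientsChangedMsg(ABC):
    """Abstract message: all the reusable actor ever knows about."""
    def __init__(self):
        self.num_clients = 0          # the 'property' the sender populates

    @abstractmethod
    def do(self, caller):             # stands in for the overridden Do.vi
        ...

class MyClientsChangedMsg(ClientsChangedMsg):
    """Concrete message: behaviour specific to one particular caller actor."""
    def do(self, caller):
        caller.show_client_count(self.num_clients)

class TcpIpActor:
    def __init__(self, caller_queue, clients_changed_msg: ClientsChangedMsg):
        # Concrete message injected via the 'constructor' and kept in private
        # data, so this actor never depends on the caller's classes.
        self._caller_queue = caller_queue
        self._clients_changed_msg = clients_changed_msg

    def on_clients_changed(self, count):
        msg = copy.copy(self._clients_changed_msg)   # mimic LabVIEW's by-value data
        msg.num_clients = count                      # populate its properties...
        self._caller_queue.put(msg)                  # ...and enqueue on the caller
```

The caller would then construct TcpIpActor(its_own_queue, MyClientsChangedMsg()), pop messages off that queue in its own loop and simply call msg.do(self) on each one – which is, loosely, what the Actor Framework’s own message handling does for you.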

Exploring the demo

If you follow the link below, you can find a zip file of a TCP – IP Test Actor interfacing to my TCP/IP Actor via abstract messages.

https://www.dropbox.com/s/jd6kw67j3p7uhzm/TCP%20-%20IP%20Remote%20Control%20Actor.zip?dl=0

Where to start.

Open up TCP – IP Actors.lvproj.

This contains 3 lvlibs – a TCP – IP Client GUI Actor, the TCP – IP Remote Control Actor, and a TCP – IP Test Actor.

Start by running Main.vi from TCP – IP Test Actor.lvlib to bring up the Test Actor.

TCP Test Actor

Then run Main.vi from TCP – IP Client GUI Actor.lvlib.

Remote Client GUI

On the TCP – IP Client GUI the available commands are populated from TCP – IP Remote Control Actor\TCP – IP Client GUI Actor and are in a tsv format.

From the Remote Client GUI you should first of all click ‘Open’ to begin a session with the TCP – IP Remote Actor. You should notice that, as well as the light on the Remote Client GUI lighting up, the Connections light on the TCP – IP Test Actor GUI lights up and the No. Clients indicator shows 1. NOTE: you can now open up another Remote Client GUI and do the same again, and the count should increment to 2.

Now that a session is open, you can select commands from the pulldown and populate the parameters, and these will update the corresponding controls on the TCP – IP Test Actor; alternatively, if you use GetNumeric, it will return the value of the Numeric Indicator.

The key to all of this working is the TCP – IP Test Actor.lvclass:Handle Remote Command.vi. This is called from the Do method of the Concrete class Remote Command handler MSG.lvclass, and there is an equivalent VI called from the Do method of Remote Connection Status Msg.lvclass.

Handle Remote Command

You will also find a VI called ‘Basic Example.vi’ under TCP – IP Remote Control Actor.lvlib\Test that demonstrates how to make single writes to the remote actor.

There’s probably not much more to add, so please have an explore of the code and message me if you have any questions, problems, or recommendations. Feel free to re-use any parts of this demo – by design you should be able to pick up the client GUI and remote control Actor and attach them to any of your own Actors – but please remember to give me credit if you do 🙂 Thanks, Dave.

GMAP Example

I’ve had a few requests now to provide an example of using GMAP based on my previous post on the subject, so here is a very quick and dirty example that just allows you to select a map provider, and then generate a route.

You’ll probably see straight away from the messiness of the code why I decided to create a LabVIEW friendly .NET sub panel rather than code it all in LabVIEW, and I’ve actually created this by transposing my C# code just to show that it can all be done in LabVIEW if necessary.

Sadly I don’t have another PC that I can verify this on to check that it’s linking to the DLLs correctly – if it does throw a strop it might be worth moving the DLLs to the system32 directory.

gmap

Link to files :-

https://www.dropbox.com/s/0571i7h7low2ain/GMAP%20Example.zip?dl=0

Learning To Love TestStand

Most of my blogs so far have been very LabVIEW focused – not surprisingly, as it is the tool I make the most use of in my day-to-day work, and of all the languages I use it is the one I am most fluent in and feel most comfortable using.

I consider it to be the best tool currently out there for doing test, control and measurement and certainly in the National Instruments (NI) world it is the ‘prima donna’, getting more exposure than probably all of NI’s other tools put together.

There is, however, another tool in the NI stable that is arguably the real star of the show, and it beats LabVIEW (and anything else I can think of) hands down when it comes to the particular task of test sequencing: TestStand.

 


TestStand Sequence Editor

 

It is a tool that, as far as I’m aware, is almost unique in the world. Out of the box it allows highly complex sequences to be built up using LabVIEW, .NET and many other languages; a wide variety of flow-control and synchronisation options such as while loops, if statements and delays; the ability to utilise multiple test sockets and manage them concurrently or individually; full user management capabilities; the ability to load limits from file or database; and a mind-boggling array of reporting options.

This only scratches the surface of what TestStand can do and there is simply nothing else on the market that even gets close to it with just these features.

It goes further though: if out of the box TestStand doesn’t do exactly what you want it to do, there are almost infinite ways in which you can customise and interface to it without losing that core sequencing ability. It is a marvel of software engineering, but one that does its business in the shadow of its far louder, brasher, more exciting brother.

What is so strange about me saying all of this is that at one point in my life I absolutely loathed TestStand, and couldn’t see why you’d want to use it over LabVIEW or .NET for instance.

My journey from hater to lover was a difficult one. I was working on a project testing a mega-complex piece of satellite payload equipment. Throughout the project I was on an almost vertical learning curve in many respects – it was the first satellite payload I’d worked on, the first time I’d used much of the instrumentation involved, and all of this while working at a new company.

We were to use TestStand and LabVIEW to develop the test software, with test planning and results handling done by a very clever .NET application which interfaced to TestStand but not LabVIEW. I didn’t have much experience with TestStand at the time, but plenty with LabVIEW, having been a CLD for a couple of years and having recently passed my CLA. Having a certain aversion to TestStand, I wanted to minimise my use of it to simply calling large single blocks of LabVIEW that did a test and returned a big cluster of results, which TestStand then reported to the test database.

As a pure bit of LabVIEW software I was very pleased with it. I’d made significant use of the newly released object-oriented features of LabVIEW and created what I felt was a very well structured, lean and robust bit of code.

For the most part I was right, but I was soon to discover that doing it this way had a number of drawbacks all of which could have been avoided by using TestStand.

The first was that, as much as LabVIEW block diagrams are easy to read for those of us who use the language on a daily basis, trying to demonstrate to RF engineers who have never used LabVIEW that you have implemented their procedure to the letter in a VI with looping etc. was not easy; TestStand’s intuitive interface is a far better environment for doing this.

The second was that I had configured all my limits in LabVIEW – either hardcoded in VIs (virtual suicide) or read from ini files. It seemed like a good idea at the time, but then the requirements changed, and it meant that rather than just modifying a completely separate file, I had to actually open the code, modify a limits VI, and then rebuild the software – utterly nuts looking back! The way I should have done it, and the way I have been doing it ever since, is to configure limits within TestStand and then use the built-in property loader to export and import values at runtime and pass these into the LabVIEW code modules.

The third was that, dealing with highly precise measurements, our RF engineer friends would go through the software with a fine-tooth comb, wanting to examine every step of the test in minute detail, and would make requests like “could you run that step again with a different value?”… My answer was no, as you can’t change values on the fly in LabVIEW – they are what they are. This particular problem was exacerbated by the fact that a couple of more experienced colleagues (and very smart ones at that) had done all of their test sequencing within TestStand, so they could do all of this.

A final reason was that, with TestStand being the entry point and reporting tool, my clever colleagues with their detailed test sequences had oodles of detail in their test reports, meaning every instrument setting and returned result was there to be analysed. My sequences, with their single-step call to a LabVIEW VI, had, well, virtually none. So when one of their tests went wrong they had a raft of data to pore through to identify the problem, whereas when the inevitable debate that usually ensues from this sort of thing – ‘is the problem the DUT, the instrumentation, bad calibrations, or your dodgy software?’ – kicked off, I had no answer or evidence to prove that it wasn’t my software that was the problem.

I could probably add other reasons to my list of how using TestStand for sequencing rather than LabVIEW would have saved me a lot of pain and embarrassment, and I’ve probably made myself look like a bit of a dumbass to anyone reading this – maybe I was, but I’m fairly good at taking criticism on board (often from myself more than others!) and learning lessons from the mistakes I’ve made.

I suppose there are two points here – the first is obviously related to the title of the blog, and that is that TestStand is an amazing sequencing tool in any kind of test environment, and has many advantages over using LabVIEW in this context. If you’re a die-hard LabVIEW developer, especially one who does a lot of sequencing, or is likely to then TestStand could save you an awful lot of pain – and even if you dislike it at first like so many LabVIEW developers do, I am certain that like me you will learn to love it.

The second point, and this is a more philosophical angle, is that the biggest lesson I learned from this episode is that, regardless of how much faith or love you have for a particular tool, keeping an open mind to the possibility that there may be better tools for certain contexts is a healthy state of mind to have. Having that attitude, and then learning to use those other tools, will often reap far greater benefits than trying to make the ubiquitous square peg fit into a round hole.

Another Project Almost Out Of The Door!

I’ve been a bit quiet on the blogging front for the past few months mainly due to being insanely busy at work – why is it that so many projects have a delivery date in August or September meaning you end up losing half your summer – or is that just my experience?

What I’ve been working on has been possibly the most interesting job I’ve ever had the pleasure of doing – it’s one of those where you can’t distinguish whether what you are doing is a job or a hobby and it consumes your entire life – but in a good way!

Back in March we got a lead regarding a university looking to put together a complete car environmental simulation system: you would drive a mass-produced car into a simulation environment that provided a 360-degree screen to simulate the visual surroundings; a system to simulate the actual operation of the car without it running (you push the accelerator pedal and the rev counter responds as if you were driving for real); and – the bit we were interested in – signals to stimulate the car’s infotainment system. It was a project we desperately wanted to win purely from an interest point of view, and happily we did.

Infotainment appears to be quite a regularly used buzzword in the automotive industry these days, and fairly obviously merges the words ‘Information’ and ‘Entertainment’. It essentially captures the fact that, even just 10 or 20 years ago, a standard car would have a radio (possibly with RDS) and a CD player and that was about it; if it was a bit more upmarket, or a slightly electrically suspect French car, it might also have a separate car health-check and management system that would tell you when the car needed a service or if a bulb was out.

The trend now is towards ever more integrated car control and entertainment systems that, as well as a stereo, have built-in GPS, USB, wireless and Bluetooth for your phone. There appears to be no sign of this letting up, with car manufacturers looking to pack more and more technology and connectivity into their cars, and governments legislating to make cars ever more clever and safe through technology.

In the context of this project, our challenge was to provide a system that could stimulate a car, or provide connectivity for it, with AM, FM, DAB, DVB-T, Bluetooth, Zigbee, GPS, WLAN, ITS, BroadR-Reach, CAN, LIN and FlexRay, and also carry out tasks such as simulating a mobile phone call or text.

Almost all of these standards are already covered by National Instruments toolkits, so we were able to make use of their GPS Toolkit, the FM-RDS and Modulation Toolkits for FM and AM, and off-the-shelf drivers for CAN, LIN and FlexRay.

For DAB and DVB-T we were able to use Maxeye’s DAB toolkit, for simulating Mobile comms we made use of an incredible bit of software called Eggplant, and for the rest we used a mix of bespoke drivers and .NET libraries.

Hardware wise we used 3 PXI Chassis containing signal generators, a VST, and serial and DAQ interfaces.

For the software we made what some people may consider to be quite a bold choice and used the Actor Framework. As I have alluded to before on this blog, it is quite a divisive topic among LabVIEW users, who either seem to hate it and won’t touch it with a barge pole, or absolutely love it – or, to be totally accurate, love it for a few weeks, hate it for a week or two, and then love it again; what I am maybe trying to say is that it is not without its frustrations at times!

Overall though, I think it’s fantastic – although I find it quite difficult to articulate why I feel this way about it.

Almost all of what you can do with Actor could be achieved with daemons, queues, notifiers and user events; what I would say makes Actor so appealing is the elegance with which you can develop very complicated, multi-process, yet utterly robust architectures with rock-solid inter-process communication whilst actually writing very little code yourself. It is also incredibly easy to make an Actor Framework application fully remote controllable (again with very little additional code), and to create highly re-usable components using abstract message classes.

An example of this elegance is error handling. A typical way I have done this before would be to have a single error-handler daemon that is passed errors from other processes via a queue. You would then have to insert an error-handler VI in every process to transmit these errors to the daemon. Works fine, great. With Actor, the way you do this is to create a base class that inherits from Actor and then override the Handle Error.vi method to decide what to do with certain errors. You then inherit this class into all your other Actors, and without writing any additional code all of your asynchronous processes are using a robust and uniform error-handling strategy – and what’s more, it is then very easy to escalate errors to, say, your main Actor and, if necessary, carry out recovery actions or stop/restart processes with ease.
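A rough Python analogue of that inheritance trick (class and method names are mine, and this is nothing like as capable as the real Handle Error.vi mechanism, but it shows the shape):

```python
class BaseActor:
    # One overridable error policy shared by every actor in the application.
    def handle_error(self, err: Exception) -> bool:
        print(f"[{type(self).__name__}] error: {err}")   # log / escalate to a supervisor here
        return False                                     # False = keep running, True = stop

class GpsActor(BaseActor):
    # Inherits the uniform error strategy without writing any handler code of its own.
    def acquire(self):
        try:
            raise TimeoutError("no GPS fix")             # placeholder for real work
        except Exception as err:
            if self.handle_error(err):
                print("stopping actor")

GpsActor().acquire()
```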

It’s something that is possibly a blog in itself, and it’s one of those things that you either get or you don’t – certainly not the be-all and end-all of how to do things, but it works for me.

On top of using the Actor Framework, because we were using multiple PXI chassis with various bits of software running on them to physically output signals, we had to decide on a scheme for communicating with these remote systems. An example of this was the GPS: we wanted to be able to control many of the GPS settings from our main GUI, but the actual GPS transmitter (RFSG) was in a different PXI chassis.

There were a number of options for this – the obvious ones being TCP/IP, Network Shared Variables, or Network Streams. A further option that came to light was LNAs (Linked Network Actors). This network-stream-based library allows you to communicate between Actors across a network, and after some testing we decided to use them.

As with the rest of Actor it has proved to be absolutely bullet proof, and in over 5 months of development we have not as far as I am aware seen a single dropped transaction.

In terms of issues, as with any project we have encountered a few. The biggest one without a doubt has been that we have at times experienced horrendously slow performance at edit time. This has been partly unavoidable, as we have been using some very big toolkits (the GPS and DAB ones especially are huge, which given what they are doing is hardly surprising).

Undoubtedly though, some of this slowness was down to our use of Actor – specifically the fact that whilst Actor out of the box gives you incredibly strong cohesion between processes through its message classes, those same message classes can make it difficult to loosely couple components; for instance, we were finding that loading the GPS Actor would also pull in all of its sibling Actors such as Radio, Zigbee etc.

There is a solution though, and that is to use abstract message classes. I wish I’d done more research into this before I started the project, as it isn’t a new thing with Actor; when we finally got around to implementing it we saw a ten-fold (maybe more) improvement in load times, and if I get another project suited to Actor I will certainly implement this from day one.

So anyway, here are some screen dumps of the (almost) finished product, if anyone is interested, to put some of what I’ve said in this post into context.

 


Main GUI

 


Remote Radio GUI

 


Remote GPS GUI

Using Time-Delayed Send Message.vi to implement a basic state machine in Actor Framework

Hello.

I’ve had an interesting problem to solve at work this week involving creating, buffering, and then transmitting GPS signals using an NI RFSG but being able to alter those signals on the fly.

For the project as a whole we committed to using the Actor Framework early on, as it is quite a complex asynchronous-process application with significant inter-process communication and a strong requirement to be able to chop and change modules, as well as add new ones, at run time.

This admittedly could also be done using daemons, but the benefits of Actor such as the robust inter-communication, ease and speed of putting an infrastructure in place and the ability to very easily make an Actor Framework remote controllable and run headless via TCP/IP (which is a very common request from clients) made it a perfect fit for this job.

However, this particular problem also had ‘state machine’ written all over it. Having tried before to do state machines in Actor with a different VI for each state, I have found they very quickly become difficult to debug and follow, so I thought I’d have a crack at putting together something that would provide all the benefits of Actor with the state machine in a single VI – this is the demo I came up with to prove the design.

It is certainly not the most inspired demo you will ever see: it is simply an overridden Actor Core GUI with an FSM running behind it that cycles through states to set 4 colour boxes on the GUI, and allows you to start/stop the cycling and change the interval between each box being set.


GUI

On the GUI we have a State indicator, which is set from the state machine itself, plus Play, Stop and Reset buttons and an FSM Interval (ms) control, which trigger events in the Actor Core override and enqueue message handlers to be executed on the core.


Actor Core Override

 


Project

 


Actor Private Data

 

The FSM itself is a single VI that holds its state information in the Actor private data and simply iterates from an idle state through 4 states which set the colour of each colour box, with a one-iteration delay between each.


Actor FSM

Now the cool bit, and one of the reasons I really like Actor. To run this as a state machine you use the Time-Delayed Send Message.vi to instruct the Actor Core to enqueue the state machine at certain intervals.

There are actually two ways you can do this. You can use Time-Delayed Send Message.vi with the iterations option set to 0, which will enqueue the state machine at a set frequency ad infinitum – which is great – but I personally prefer to use Time-Delayed Send Message.vi with the iterations option set to 1. This does a single enqueue of the specified message after a time delay, which I do once in the overridden Actor Core to get the process started, and then again at the end of every iteration of the FSM.

My main reason for this preference is primarily debugging: if you use the infinite enqueue option (0) and run the VI that is being enqueued with execution highlighting switched on, messages do not stop being enqueued in the background, so as soon as you turn off highlighting all those queued messages will get executed one after another without any delay.

Another reason for my preference is that, by enqueuing on each iteration, you can alter your interval rate: for instance, when the state machine is in IDLE I enqueue an element with a delay of 1 second, then when the Play button is pressed the interval is set from the front-panel FSM Interval control. It is a neat little feature that I can think of many uses for, e.g. the classic CLD car-wash exam where you have to time each state using an elapsed-time VI – here you could simply use constants for going between states. I have also used Time-Delayed Send Message.vi quite extensively for data-logging use cases, where I have found its timing to be remarkably accurate.
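For readers who don’t use LabVIEW, the self-scheduling idea boils down to something like this Python sketch (names are invented, and threading.Timer is only a loose stand-in for Time-Delayed Send Message.vi with iterations set to 1):

```python
import queue, threading

class FsmActor:
    # The FSM re-schedules its own 'advance' message at the end of each
    # iteration, so the delay can change between states (1 s in IDLE, then
    # whatever interval the GUI asks for once Play is pressed).
    def __init__(self):
        self.msgs = queue.Queue()
        self.state = "IDLE"
        self.interval_s = 1.0

    def send_delayed(self, msg, delay_s):
        # Single delayed enqueue of a message to ourselves.
        threading.Timer(delay_s, self.msgs.put, args=(msg,)).start()

    def run(self, iterations=10):
        self.send_delayed("advance", self.interval_s)        # kick-start once, as in Actor Core
        for _ in range(iterations):
            msg = self.msgs.get()
            if msg == "play":                                # stands in for the GUI event handler
                self.state, self.interval_s = "BOX1", 0.2
            elif msg == "advance" and self.state != "IDLE":
                self.state = {"BOX1": "BOX2", "BOX2": "BOX3",
                              "BOX3": "BOX4", "BOX4": "BOX1"}[self.state]
                print("state:", self.state)
            if msg == "advance":
                self.send_delayed("advance", self.interval_s)  # re-enqueue every iteration

actor = FsmActor()
actor.msgs.put("play")   # simulate the Play button being pressed
actor.run()
```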


FSM State Enqueuer

In terms of the event handlers, they simply set the next state of the FSM (but could obviously do a lot more). Another really nice feature of Actor in this context is that, because all messages are executed in line with the FSM, it is very easy to debug and to integrate user events with the state machine. One thing to watch out for, though: if you are putting very small or possibly no delays between the FSM being enqueued for execution, it is worth giving your event-handler messages high priority to ensure they always get executed as soon as an iteration of the FSM has finished, or before if it is in a delayed state.


Play event handler.

Link below for project – possibly obvious, but run Main.vi to get started – LabVIEW 2013.

Actor Framework Basic FSM Example Project

Hopefully it’s been of use to someone somewhere, and thanks for reading.

Dave

 

 

Are Professional Certifications Really Worth The Hassle?

A brief blog post….

I am currently sitting on my sofa with way too many windows open on my laptop, attempting to cram in as much information as possible for a multiple-choice exam tomorrow which, if I pass, will take me 1/5 of the way toward becoming a National Instruments Certified LabVIEW Embedded Developer (CLED) – the other 4/5ths involve doing a 4-hour practical exam.

I need a bit of respite from the revision, so writing this is it!

This is a new exam being offered by National Instruments, born out of the fact that, due to the distinct nature of embedded development in the LabVIEW environment, having a CLD (Certified LabVIEW Developer) or CLA (Certified LabVIEW Architect) qualification doesn’t really cut it. Supposedly there have been a few issues with CLDs and CLAs rolling into companies proclaiming to be LabVIEW experts, working on embedded projects, and royally screwing up – which isn’t only bad for the person who has screwed up, but is also not a very positive advert for the CLD and CLA community in general if someone who has jumped all the hurdles and gone through all the pain of getting the qualifications then cannot actually do the job.

This isn’t all that surprising though. Developing for an embedded target, be it a real-time controller or an FPGA, using LabVIEW involves first throwing out a huge swathe of best practices and habits learned over many years and returning to basic principles – working in bits and bytes, configuring registers, managing or at least considering memory yourself, and understanding timing constraints – that are more akin to developing in, say, embedded C. So I think it’s a very positive thing that NI have devised this exam to test people in an embedded environment.

As part of this, NI have invited people from their alliance program to take part in a training course to prep them for the exam, taught by the ever-brilliant and knowledgeable Mike Bailey, and this is what I have been doing over the past couple of days with 11 other people from around the UK and Europe – all of us desperate to add to the current 7 CLEDs worldwide, of which I believe only one is within Europe.

How do I rate my chances? Well, it’s not looking too good right now. I’ve just finished the sample multiple-choice paper and got a respectable 65%, but that’s 5 short of the 70% needed to get through to the practical exam. My weaknesses are fairly clear to see: the exam is based on the real-time and FPGA courses, of which I did the former about 11 years ago and the latter last summer – so no surprise that I only dropped a couple of questions in the FPGA section and the rest of the marks in the real-time section. I have a lot to learn tonight!

As ever when doing these things, the question pops up in my mind of whether all the hassle of doing certifications is really worth it: the late nights revising, the stress, the time away from work taking courses, and the overall effort it takes to maintain a certification once it has been achieved. I mean, I know many software engineers far more successful than I think I’ll ever be, and they wouldn’t touch certifications with a barge pole!

I initially decided to go the certification route about 10 years ago. The company I was working for at the time was keen on getting their employees certified, and as a young graduate looking to learn and gain expertise, as well as to gain the respect of his peers and improve his employment prospects, it seemed like a good way to go. I think gaining the CLD in 2004 and then the CLA achieved, for the most part, all of this. But 10 years on, with almost 13 years of commercial development under my belt, is it still really worth me maintaining existing certifications and acquiring new ones when I have so much experience I can turn to?

My answer to all these questions (and this is on a personal level, not a rule I feel should be applied to everyone) is yes – for a number of reasons:-

  • The certifications move with the products, and I believe that to survive as a software developer in most cases you have to keep up to date with the latest and greatest to keep a competitive edge; renewing certifications, and having to learn or refresh yourself on certain techniques to do so, is a great way of achieving this.
  • No job is forever, and I certainly don’t take my job for granted – we may be busy now where I work and I love the job, but things change rapidly; a few barren months or an undesirable change within the company that causes me to no longer enjoy working there and I could find myself back out on the job market where if things are still recessionary then every way I can differentiate myself from other candidates is welcome, and certification is definitely one way of doing this.
  • Customers like certifications, and it is a good marketing tool to be able to proclaim a number of certified people when bidding for work to give potential clients a certain amount of confidence that if they award you a contract then you have the skills to fulfill it – you’re low risk.
  • I like to have a sense that I am doing a job properly, and following a certification track, learning best practices and understanding the various pitfalls of doing things a certain way gives me extra confidence that I am. Even if I don’t necessarily agree with every best practice or guideline, it is a good reference point and, if anything, at least makes you think about things a bit more.
  • A major driver for me is simply the fact that I enjoy learning, but find it very difficult to do this without having a target to aim towards or a structure within which to do it, and certifications give me both of these.
  • Dare I say it, but like everyone I have an ego and it’s a nice feeling to be able to say you’ve achieved a qualification and to be able to add a few acronyms to the end of your name or on your LinkedIn profile and I like to think that it gives me a certain amount of authority when being consulted on LabVIEW or TestStand issues – it’s a nice way to re-assure people from the start that you know what you’re talking about.

Are there any drawbacks of certification (apart from the stress, late nights of revision etc!)? Again I’d say the answer to this is yes:

  • For one, there is a danger of pigeonholing yourself. I’ve done an awful lot of C, C++, C#, Java, embedded etc. over my career, but I am known as a LabVIEW and TestStand Architect, and certainly when I have interviewed for non-LabVIEW or TestStand jobs this has come up as a problem: people view me as so closely aligned to these development tools that they question my other skills, or assume that I am probably at best average, even though I can provide a decent amount of evidence that I am and would be more than competent in these other languages.
  • Being perceived as an expert can also sometimes be a problem when managing people’s expectations. Some people (mainly the non-tech-savvy) wrongly assume that because you have a certification you must have super powers and be able to tackle all problems in super-fast time while producing consistently top-quality output. I obviously strive for all of these qualities, but the truth is that just having a certification doesn’t instantly mean I will always be more efficient, or deduce a better solution than someone who isn’t certified, or never make mistakes – if a problem is hard it will be hard for most people, and having a certification is not going to magically diminish that. I have come across this attitude though, usually accompanied by a comment along the lines of “but you’re a CLA, this should be easy for you”! Arguably a nice position to be in, in a funny sort of way.
  • LabVIEW certification in itself can be divisive – there are many people in the world who do not consider LabVIEW to be proper programming, who almost think of it as some sort of toy language, and who can therefore be very dismissive of the LabVIEW certifications themselves and, by extension, your skills. I personally couldn’t disagree with this position more. Certainly LabVIEW is a far more abstracted language than, say, C++, and definitely has a far lower threshold at which you can become productive with it. In all other respects though, there really isn’t that much difference between developing in C, C++, Java etc. and LabVIEW in my experience – they all have their quirks, nuances and pitfalls, all require that you apply good development techniques to extract the best from them, and all will bite you if you don’t – LabVIEW in some ways more than most, as it is very easy to write bad LabVIEW code that works. There are good and bad developers in any language, so I think it is wrong to consider it a lesser one.

Anyway – enough of my ramblings. Back to the revision. Wish me luck.
Dave