
Automatic Testing: Battleships Anyone?

In the last article, we made the case for augmenting testing with automatic algorithms to complement human effort. We explored what automatic methods can do by looking only at the source code, and the problems they can solve for us. This article picks up where we left off and explains how we can analyse programs and automatically find inputs that achieve code coverage.

A Simple Programming Language

To start with, we'll use the simple programming language from the previous article to demonstrate how analysing a program lets us transform the activity of generating inputs into a search problem. Our language has the familiar constructs: branching (if statements), relational operators (equality, greater than, less than) and a return statement. The language has variables too, the values of which might be set elsewhere (but can't change).

if (a == b) {
    if (a == c) {
        return 'Scalene'
    } else {
        return 'Isosceles'
    }
} else {
    return 'Equilateral'
}
Figure 1: An example program

The example has a small amount of behaviour. We control three inputs, a, b and c, and one of three things will happen depending on their values. If the three inputs are the same, we will reach the first return statement. Otherwise, we will reach one of the other two return statements, depending on the values of the inputs.

This example has a few things wrong with it. Firstly, the implementation is wrong: triangles are misclassified. There also aren't enough checks to make sure that the side lengths make a triangle. However, we can't see these faults by looking at the code in isolation. We only know about the faults because we know what a triangle is.

If the code can't tell us if it's broken, what does it tell us? We can take two different 'views' of code. The first perspective is 'static'; we run a tool that analyses the contents of the source file and looks for problems. Static analysis tools such as PVS-Studio and Coverity can uncover potential bugs using detection rules and data flow analyses. The other perspective is 'dynamic'. Instead of reading the source code, we watch what the program does, analysing it 'dynamically'. These approaches are complementary: each is adept at finding particular kinds of flaw.

Dynamic analysis is well suited to problems where the source code can't be analysed, or where the results can't be easily modelled (more on models later). In this example, we'll use dynamic analysis to cover 100% of the decision points in the program. We can enumerate the paths through the program statically by tracing each branch. While it's straightforward to get the list of paths, generating inputs that follow those paths is more difficult.

Figure 2: The path to the goal
To begin, suppose we want to reach one of the return statements: the target shown in Figure 2. In order to do that, we'll need to find a path through the program that allows us to reach that statement. We can compute the path by collecting the branches taken to reach the target, shown in the box on the left.

At this stage, we have a target, and we know the path we need to take to reach it. However, at each decision point in the path there is a test we must satisfy so that our input values allow us to advance. This is where search and computing power come into play.

Finding Inputs

At each step along the path, there is a condition that we must solve. We can't simply try to solve each in turn, as values that satisfy one condition might conflict with later conditions. There are two key ways we can try to solve these conditions.

Solvers

We can take the condition for the branch and analyse it symbolically. This works by taking each of the conditions from the branches, formulating it as a mathematical expression, and reasoning about it using, for example, an SMT solver like Z3. This is easy for simple, linear constraints, such as a == b, but becomes more difficult as the math gets harder and the constraints get larger. Pex (used in Visual Studio 2015's Smart Unit Tests) uses this approach and relies on a comprehensive set of "models" for various arithmetic, string, and IO operations that allow it to generate test cases for more complicated programs.
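As a rough sketch of the idea (assuming the z3-solver Python bindings, and using the condition for the example's second return statement, a == b and a != c, purely as an illustration), the conditions for a path can be handed to a solver, which replies with concrete input values:

from z3 import Ints, Solver, sat

a, b, c = Ints('a b c')        # symbolic stand-ins for the three inputs
solver = Solver()
solver.add(a == b, a != c)     # the path condition collected from the branches

if solver.check() == sat:      # the solver decides whether the path is feasible
    model = solver.model()
    print(model[a], model[b], model[c])  # concrete values that follow the path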

Search

Instead of modelling the conditions and solving the constraint analytically, we can take a "scientific" approach. We try inputs, and use a modified program that replies with information on how close we were to the branch we targeted. Armed with that information, and information about previous attempts, we can try new values that might be more likely to reach the target. This approach is simpler to implement, and has the advantage that it only requires a minimum of understanding about the code to generate input data. Evosuite, and to an extent, fuzzers such as AFL, use this approach.

The program needs to tell us how close each input we try is to our target. In the example, we can easily tell if we reached a point on the target path by stepping through the program. However, if we miss the goal, we don't have the feedback we need to make the next guess. Unless we do something more intelligent, we can only make random guesses.

We get the feedback we need by modifying the program so that we measure the proximity of our inputs based on the difference between the input and the value that satisfies each constraint. When this is maximised, we've reached our target. We've gone from playing battleship ("hit" versus "miss") to playing Marco Polo (the feedback tells us where the target is compared to where we are).

In the case of our program, we can transform constraints into functions of the input variables, then search for inputs that maximise these functions. However, we must be careful that our transformation of values produces the right shape and gives us that proximity measure.

Figure 3: Detailed and undetailed feedback (Click here to switch feedback)
The example shows two encodings of the constraint x == 10. If you toggle the function, you can see the difference between the two. One produces the maximum value (100) only when x is 10, whereas the other takes the absolute value of 10 - x and subtracts it from 100, giving a slope towards the best value. Sloped shapes are ideal for search algorithms, as they give them a clue where to try next. In contrast, a spike at one value is hard to find without relying on luck.
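As a minimal sketch (in Python, with hypothetical function names), the two encodings of x == 10 from Figure 3 might look like this:

def fitness_spike(x):
    # "Battleship" feedback: full marks only when the constraint holds,
    # and no information at all otherwise.
    return 100 if x == 10 else 0

def fitness_slope(x):
    # "Marco Polo" feedback: 100 - |10 - x| slopes towards the satisfying
    # value, so every guess tells us whether we are getting warmer.
    return 100 - abs(10 - x)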

 

Hitting Targets

Figure 4: Building up a picture with random guesses
In the search process we start with no information: we have to make a completely random guess. We can repeatedly make guesses across the input space to get an idea of where the values that reach our target lie. The animation shows how this works. After a few random guesses, the "shape" becomes apparent. Instead of guessing randomly, we can use the shape to make a more informed choice. It's important that we don't have to understand the process that generates this shape; we can still reach the target without understanding the relationship between inputs and outputs.

There are multiple algorithms we can use to try to find these inputs. In our example, we only have one input, which simplifies the problem a great deal. To find the maximum, there are a few strategies we can employ. The simplest is a random search, as used in the example above. If we wait long enough, we'll probably find the answer. Instead of relying on serendipitously finding the answer, we can be more methodical.

Figure 5: A simple search algorithm (switch feedback)
The example in Figure 5 shows how the optimum can be reached. We start from a random point, then move in the direction that produces the highest increase in fitness. If you toggle the feedback function, you can see how a more intelligent approach is able to find the maximum value by moving in whichever direction yields an increase over the previous guess.

As the naive function is flat with the exception of the one good value, the search will give up (unless it randomly chooses it as its starting point). In contrast, using the absolute value of the distance gives a slope that is easy to traverse.
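To make this concrete, here is a minimal sketch of the kind of search shown in Figure 5 (Python, with an assumed input range of -100 to 100): start from a random point and keep moving to whichever neighbour improves the feedback.

import random

def hill_climb(fitness, lo=-100, hi=100, max_steps=1000):
    x = random.randint(lo, hi)                      # random starting point
    for _ in range(max_steps):
        best = max([x, x - 1, x + 1], key=fitness)  # keep x on ties
        if best == x:                               # no neighbour is better: stop
            break
        x = best
    return x

# With the sloped encoding this homes in on 10; with the spiked encoding it
# stops wherever it started, unless that happens to be 10.
print(hill_climb(lambda x: 100 - abs(10 - x)))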

Covering Branches

Now we have a way to extract the constraints from each branch, plus a way of representing those constraints so that a search algorithm can work on them. The final step is to combine them. In cases where we have multiple branches, we want to find values that satisfy all of the conditions simultaneously. In a search-based context this is simple: we just aggregate the individual functions.
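As a minimal sketch of that aggregation (Python again, reusing the sloped encoding from earlier as a hypothetical example), the fitness for the path to the first return statement sums the feedback for its two conditions, so the search is pulled towards inputs that satisfy both at once.

def branch_fitness(actual, required):
    # Sloped feedback for a single equality constraint, as before.
    return 100 - abs(required - actual)

def path_fitness(a, b, c):
    # Aggregate the conditions a == b and a == c by summing their individual
    # feedback; the maximum (200) is only reached when both hold at once.
    return branch_fitness(a, b) + branch_fitness(a, c)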

In practice, more advanced searching algorithms are used that reduce the number of inputs tried. There are a variety of approaches that can be used to minimise the time wasted exploring useless inputs, such as pattern search, which increases the stride it takes as long as fitness values increase. In principle, however, these algorithms are very similar to one another; at the very least they share the same goal.

Summary

This article showed how program analysis allows us to establish targets and generate feedback we can use to search for inputs. The next article in this series will expand on search algorithms, showing how the approach described here can be put into practice by generating tests for our program with a more sophisticated search algorithm.


Innovations in Healthcare Conference 2015

Location: The Octagon, The University of Sheffield, Western Bank, Sheffield, S10 2TN
Date: 13th July 2015, 9:30am – 4pm
#iihealth
@ShefHealthcare

Now in its 5th year, Innovations in Healthcare aims to build on previous successes, bringing together individuals from across the healthcare sector to see why The University of Sheffield is at the forefront of healthcare research and to discuss potential research collaborations.

Last year saw over 300 delegates register from 160 companies, both large and small, looking to learn more about research at The University of Sheffield through presentations, exhibitions and 1:1 networking, and we are delighted to say that we will be demonstrating at the event. Come down and see our exhibition!

To register, click here


The emergence of VR and interactive technologies

The emerging range of 3D virtual reality headsets, such as the Oculus Rift, Google Cardboard, Samsung GearVR, HTC Vive and Sony’s Project Morpheus, provides unprecedented levels of immersion and presence in devices that are affordable and soon to be widely available to the public. Two of these devices, the Google Cardboard and Samsung GearVR, are powered by a consumer mobile phone, while the remaining three are powered by external computers or games consoles with dedicated graphics cards.

Developing software applications for these devices presents a different set of challenges when compared to normal 3D applications. The graphics card processing requirements are higher, the perceived visual quality is lower, and due to the immersive nature of the devices, failure to maintain a high frame rate with low levels of latency not only interferes with the user’s experience, but can also cause physiological side effects, such as motion sickness and dizziness. In addition to this, as physical movements within the real world are more closely mirrored in the virtual world, the physical constraints within both real and virtual worlds need to be considered for both user safety and enhancement of the experience. This article is the first in a series looking at the development of, and interactions within, immersive virtual environments. Starting with the current limitations of the hardware, the article describes how to overcome some of these limitations, and how to get the most out of the current generation of devices.

A key factor in successful immersive virtual environments is the illusion of presence. When this illusion is achieved, the environment tricks the brain into thinking that what is being perceived is actually the real world. Now, the user knows that it isn’t real, they know that they are wearing a headset, but at a low level their brain does not. This means that psychological responses such as anxiety, fear and excitement can be induced in the virtual environment. We know from movies that all these emotional responses can also be induced by a 2D screen. However, in an immersive 3D environment, these responses can be significantly heightened, as the brain can more easily be convinced that what it is experiencing is real.

Low resolution – high performance

The new generation of virtual reality headsets work by having the screen very close to the user’s eye, using lenses to focus. As the visual display is so close to the eye and magnified, the effective resolution is very low when compared with modern monitors. This means that the image can appear pixelated and blocky. This is commonly referred to as the screen-door effect, and it gives the feeling of looking at the scene through a semi-translucent barrier. Due to the immersive nature of the display, and the fact that the image displayed to the user needs to change quickly whenever they move their head, a very high frame rate is needed. The current Oculus Rift development kit recommends a frame rate of 75 frames per second (fps), which means that the graphics card has to re-draw the scene 75 times per second, per eye – so 150 times every second. Typical computer monitors display at 60 frames per second, so the demand on the graphics card will be at least 2.5 times greater. This frame-rate requirement is only going to get higher, with the upcoming HTC Vive headset targeting 90 fps and Sony’s Morpheus 120 fps.


Even though the perceived resolution is quite low, the actual resolution is still quite high, and requires high performance graphics cards to maintain a high frame rate. As the resolution and refresh rates of the headsets increase, so will the required GPU performance. This in itself poses a problem for the industry, as the most powerful commercial graphics cards may not yet be powerful enough to allow for high quality rendering of complex scenes at such a high frame rate. The top of the range cards will do a reasonable job, but will initially be prohibitively expensive for most people, and certainly more expensive than the VR headsets themselves. In order to reach the widest audience, application developers have to ensure that their application consistently runs at as high a frame rate as possible, on the widest range of hardware, while at the same time providing a good experience.

Motion sickness

Virtual reality headsets are notorious for causing nausea in users. Whether a user feels sick or not depends on a number of factors, and some people are more susceptible than others. Put simply, nausea occurs when what a person sees does not match up with what their brain expects to see. For example, if you move your head to the side and what your eyes see does not change in the way it normally does – in the way that the brain expects it to – then this can very quickly lead to headaches and sickness. Other factors that can cause discomfort are poor resolution, unresponsiveness or jerkiness, low frame rates and juddering. Flashing lights, fast-changing scenes and in-game constraints preventing movement can also cause issues.


When developing games and other interactive applications that traditionally have cut-scenes, static menus and fixed images, it is important to take this into consideration and provide the user with some suitable feedback when they move. For example, a menu could be displayed as a semi-transparent overlay on a simple virtual room, with the menu fixed in place but the room moving in the background. Cut-scenes could be recorded as character animation sequences instead of static videos, so the user can still look around as if they were part of the scene, or they could be displayed on a screen in front of the user, like watching a TV within the virtual room.

Lateral movement

Although the new generation of VR headsets all provide a full 360 degree view based on head orientation, they are not all capable of detecting subtle lateral head movements. Without this, when you move your head side to side, or lean forward, the headset is unable to detect the movement and the displayed image remains the same. This not only breaks the illusion of presence, but it can also cause motion sickness. This is a key limitation of the mobile phone based solutions, such as Google Cardboard and Samsung GearVR. This issue has been tackled in different ways by the computer-driven solutions. The Oculus Rift has the ability to track the lateral movement of the user using an IR camera which is placed in front of the user and is synchronised with the headset to combine the rotational and lateral movement of the head. This is designed to detect subtle movements, and not to track the user moving around the room. Morpheus works in a similar way, using the Playstation camera to track both head orientation and movement. The HTC Vive stands out in this regard, as it is the only whole-room solution. Using wall-mounted sensors, it can track movement within a wide area.

Leaving the HTC Vive aside for the moment, the solution implemented in the Oculus Rift and Morpheus leads to another issue: the way that this movement is interpreted once inside a virtual environment. By moving your head to the side, it is possible to move the camera position in the 3D world outside the range of traditional in-game constraints. Although the tracking is designed to compensate for small head movements, it can actually cover quite a wide area, so if the user physically moves themselves to the side, the virtual camera will move accordingly. Depending on the type of application, this may or may not be a serious issue. If, for example, the user is flying through an open scene with no physical object constraints, then this will not be an issue. However, if the user’s character is physically constrained in some way, for example sitting in a chair, the range of possible motion will potentially move the camera so that the person is out of their seat. This could be made worse if it allowed them to move through a wall. Traditionally, the physical movement of a character is constrained by the movement controls, whereas the orientation and positional offset of the Oculus Rift headset is applied afterwards, and only alters the view. What this means is that the constraints placed on character movement also need to be enforced when the user moves their head. This may result in a break from the illusion of presence if the user’s head moves in the real world but is constrained within the virtual world.
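As a very rough illustration of enforcing those constraints (a Python sketch with hypothetical names, using axis-aligned bounds to stand in for whatever collision system the game actually uses), the headset's positional offset can be clamped before it is applied to the view:

def constrain_camera(character_pos, head_offset, bounds):
    # bounds is a list of (min, max) pairs per axis describing the volume the
    # character's head is allowed to occupy (e.g. the space around a chair).
    camera = [p + o for p, o in zip(character_pos, head_offset)]
    return [min(max(c, lo), hi) for c, (lo, hi) in zip(camera, bounds)]

# Seated character at the origin, head allowed to move 30 cm to either side.
print(constrain_camera([0.0, 1.2, 0.0], [0.5, 0.0, 0.0],
                       [(-0.3, 0.3), (0.9, 1.5), (-0.3, 0.3)]))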

This is particularly disturbing when the user’s view is positioned on a virtual avatar, so that when the user looks down they see a body and hands. When lateral movement is applied, the user can become completely disconnected from their body. One possible solution is, instead of offsetting the camera position by the lateral movement, to link it to a rotation of the hip, thereby offsetting the camera position implicitly (assuming a bone structure where the camera is fixed above the neck, and all bones are linked correctly). This ensures that the camera remains fixed on the neck, without losing the benefits of lateral movement. It also means that the amount the head moves in the virtual environment can be partially constrained based on the skeletal kinematics of the character.

For devices which are not designed for wide-range tracking, there is currently no built-in way to distinguish between a small head movement and the user physically moving themselves. However, there are a few options that could be used to make this distinction. Video and depth camera tracking, such as that provided by the Microsoft Kinect, could be used to distinguish between simple head movement and physical movement of the user; the ability to make this distinction could potentially be built into Sony’s Morpheus when it is released. Alternatives such as the Sixense positional tracking controllers could also be used. These are potentially easier to integrate into a VR headset, and will in the long term provide the most complete range of motion, which could even replace the built-in position tracking and its limited range. Other options include joint and positional tracking using real-time motion capture suits, such as the PrioVR, or constraining physical movement using walking platforms such as the Virtuix Omni.

By extending this range in which a person’s movement is tracked, a new problem is introduced: how to stop the user from inadvertently colliding with the now-hidden real world. By extending the amount of real movement the virtual world can detect, you run the risk of the user moving too far in the real world and injuring themselves. When immersed in a virtual environment, you completely lose track of where you are in the real world: how close you are to the desk, the screen, and other obstacles in the room such as chairs and tables. This is not such an issue if you remain seated, but if you stand up and start walking around, it can become a problem. The HTC Vive provides in-game mechanisms for showing the walls of the room, but it does require the room to be empty of obstacles, which may not be feasible for most people.

Simple optimisations

When developing any virtual environment, it is important to get the right balance between quality and performance. As already mentioned, higher frame rates will increase the realism and reduce the risk of nausea. Even with state-of-the-art graphics cards, this is not always achievable with complex scenes. Many of the scene partitioning and optimisation techniques that are used in modern 3D applications can still be used. However, in addition to standard scene optimisation techniques, there are additional steps that can be taken to help improve performance on virtual reality headsets. These optimisations were applied to our Virtual Forest simulation, to enable fast visualisation of a massive point cloud data set with over 350 million points.

Virtual Forest

Single-step scene culling. Scene culling refers to the removal of objects from the scene before the graphics card is told to render them to the screen. Ordinarily, the scene is culled from the perspective of the position and orientation of the virtual camera, removing objects that are outside the camera’s view frustum. However, for 3D headsets there are two virtual cameras, one for each eye, at slightly different positions within the scene. This means that the scene is effectively rendered twice from different positions, and that scene culling would normally be applied twice per render loop. Instead, by extending the view frustum used for culling and placing its starting point in-between and slightly behind the two eye cameras, the objects that are not visible to either eye can be accurately culled in a single step.
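A rough sketch of the idea (Python, with a view-cone test standing in for a full frustum test and an assumed widening factor): place one culling "camera" between and slightly behind the eyes, and widen its field of view so it encloses both eye frustums.

import math

def single_step_cull(objects, left_eye, right_eye, view_dir, half_fov):
    # Cull once from a point midway between the eyes, pushed back by roughly
    # the eye separation, with a widened field of view.
    eye_sep = math.sqrt(sum((l - r) ** 2 for l, r in zip(left_eye, right_eye)))
    origin = [(l + r) / 2 - d * eye_sep
              for l, r, d in zip(left_eye, right_eye, view_dir)]
    widened = half_fov * 1.2                   # assumed widening factor
    visible = []
    for centre, radius in objects:             # objects as (centre, bounding radius)
        to_obj = [p - o for p, o in zip(centre, origin)]
        dist = math.sqrt(sum(t * t for t in to_obj)) or 1e-6
        cos_angle = sum(t * d for t, d in zip(to_obj, view_dir)) / dist
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        angular_radius = math.asin(min(1.0, radius / dist))
        if angle - angular_radius <= widened:
            visible.append((centre, radius))
    return visible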

View dependent render quality. Ordinarily, complex 3D scenes are broken down into collections of objects, where each object has multiple versions with different levels of detail. If an object is close to the virtual camera, then a high quality version of the object is used. If the object is further from the camera, then a lower quality version can be used without having a large impact on the visual quality perceived by the user. This reduces the complexity of the scene at any one time and can significantly increase the frame rate. When using a VR headset, the objects at the centre of the user’s vision are the ones the user is going to focus on most. Objects at the edge of the view, and especially the corners, are going to be distorted and hidden due to the “fish-eye” distortion that is applied to compensate for the lenses. Therefore, in addition to reducing the quality of objects further from the camera, the quality of objects at the edges can also be reduced. This means that only objects that are directly in front of the user need to be high quality, and all others can be reduced based not only on distance from the camera, but also distance from the centre of the current camera view. This can be calculated at the same time as the scene is culled, and areas on the periphery can be marked so that lower levels of detail are displayed.
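A minimal sketch of that selection (Python, with an assumed 50-unit distance falloff and the number of detail levels as a parameter), combining distance from the camera with angular distance from the centre of the view:

import math

def level_of_detail(obj_pos, cam_pos, view_dir, levels=4):
    # 0 is the highest-quality version; objects that are far away OR in the
    # periphery of the view are given a lower-quality level.
    offset = [o - c for o, c in zip(obj_pos, cam_pos)]
    dist = math.sqrt(sum(d * d for d in offset)) or 1e-6
    cos_angle = sum(d * v for d, v in zip(offset, view_dir)) / dist
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    distance_term = min(dist / 50.0, 1.0)             # assumed falloff distance
    periphery_term = min(angle / (math.pi / 2), 1.0)  # 0 at centre, 1 at 90 degrees
    return round(max(distance_term, periphery_term) * (levels - 1))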

Motion based adaptive degradation. In reality, when a person turns their head, they do not see the world as clearly and in as much detail as they do when their head is still. We can take advantage of this, and link the render quality to the speed at which the head is moving. If the user is turning their head quickly, the quality of the objects being rendered to the screen can be reduced, thereby increasing the frame rate for a smoother experience when turning. Depending on the type of scene being rendered, techniques such as level of detail and reduction in point cloud density can be used without the user noticing.
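For example (a hedged sketch in Python, with the thresholds and units as assumptions rather than recommended values), the render quality could be scaled down as the head's angular speed increases:

def quality_scale(angular_speed, full_quality_below=30.0, min_scale=0.5):
    # Map head angular speed (degrees per second) to a render-quality scale:
    # full quality while the head is roughly still, dropping towards
    # min_scale as the head turns faster.
    if angular_speed <= full_quality_below:
        return 1.0
    excess = min((angular_speed - full_quality_below) / 150.0, 1.0)  # assumed ramp
    return 1.0 - excess * (1.0 - min_scale)

# Feed the result into level-of-detail selection or point cloud density each frame.
print(quality_scale(10.0), quality_scale(120.0))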

This combination of techniques will result in a higher frame rate, and smoother experience, without any perceived reduction in visual quality.

Conclusions

The technical landscape of virtual reality is changing rapidly, and new devices and ideas are emerging to completely change the way we think about and use 3D technology. These new technologies have applications in areas far more diverse than just games, and can provide new ways of working, communicating, interacting with people and understanding problems. These devices bring their own challenges, and we need to develop new sets of standards and best practices to keep up with the rapid changes in front of us.


Automatic Software Testing: Rise of the Machines?

Automatic software testing has been a long-standing dream of many industrial practitioners and researchers alike. However, in most industries the state of practice lags behind the research. This series of articles will delve into the latest research, showing where automatic search-based techniques can fit into existing testing processes. First, we’ll explain the motivation for making computers test (or check) software, and show where they can be complementary to human testers.

Motivation

Software testing is crucial. There is a plentiful supply of examples of failures caused by software that could have been prevented with better testing procedures. One of the canonical examples used in Software Engineering classes is Ariane 5’s maiden voyage. The component that triggered the failure was reused from Ariane 4, but was not tested for Ariane 5’s different flight path. Simulations that demonstrated the failure were found, unfortunately, only after the launch and the rocket’s subsequent self-destruction.

Testing isn’t easy. Failures can manifest in complex interactions between various components. One of the aims of software testing is to provide assurance that these interactions won’t cause failures when the system is deployed. As creative as humans are, the number of cases to consider makes the game of guessing which interactions might fail significantly challenging.

In complex systems, even after a failure, the exercise of finger-pointing is often tricky. In his book “Normal Accidents”, Charles Perrow deconstructs many failures of complex safety-critical systems to show, worryingly, that such systems are highly likely to fail as a result of their complexity.

This almost makes building systems seem like a futile task; however, the show must go on, so to speak. Most of our actions aren’t risk-free, so we have to have some appetite for risk. Testing serves to reduce risk by building confidence that the product behaves as it should, and by identifying defects when they are least troublesome (i.e. before the customer has seen them).

Unfortunately, budgets constrain the amount of testing that can go into managing a project’s risk. As a result, testing must be delicately prioritised to deliver the best risk reduction possible. This might involve forgoing, or reducing the depth of, tests for less critical components to ensure that significant failures are protected against, and that common functionality is available.

Creativity versus Methodology

Human creativity is a great way to look for bugs. Given enough time, a human has a chance of finding all the failure modes of a piece of software. Computers, by contrast, aren’t creative, but they don’t get tired or bored either. Part of testing involves tedious repetition of processes over multiple configurations. These are tasks for which computers are well-suited, hence the rise of automation in the software QA industry, with the aim of increasing the time testers can spend creatively trying to locate failures.

As useful as automation is, the main barrier to adoption stems from the need for a human to tell the computer what to do. Scripting is a technical skill that many testers don’t need, so it’s often unreasonable to ask them to don a developer’s hat and write some code; code that they’ll undoubtedly have to maintain.

A large body of software testing research is directed at a problem that requires no human input: generating the test inputs themselves. As we discussed previously, it’s important for testing to cover as much of the behaviour of the product as possible to reduce the risk of flaws. Unlike scripted automation, uncovering inputs to programs (at the unit level) can be made automatic.

Coverage

In the academic software testing field, work is often evaluated on its ability to maximise “coverage”. When we talk about coverage, we often use it in specific terms, such as code coverage for test adequacy, but these are all proxies for an unattainable ideal: coverage of the behaviour of the program by our test suite. In other words, it’s how confident we can be that the system is unlikely to exhibit faulty behaviour, because our test suite should already have allowed us to observe it.

With an appropriate method of measuring the coverage of a test suite, we can build a feedback loop in which an algorithm constructs some tests (checks), then tweaks them in response to the coverage they attain. This loop is fundamental to a lot of automated systematic testing tools, such as EvoSuite, KLEE, and Code Digger (a.k.a Pex) to name but a few.
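In outline, that loop might look something like the following sketch (Python; branches_covered stands for whatever instrumentation hook reports which branches a candidate input exercised, and random generation stands in for the smarter "tweaking" real tools perform):

import random

def generate_suite(branches_covered, n_inputs=3, budget=1000):
    # Keep generating candidate inputs, but only keep those that add coverage
    # the suite doesn't already have.
    suite, covered = [], set()
    for _ in range(budget):
        candidate = [random.randint(-100, 100) for _ in range(n_inputs)]
        gained = branches_covered(candidate) - covered
        if gained:
            suite.append(candidate)
            covered |= gained
    return suite, covered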

Unlike humans, machines can feasibly spend hours trying different inputs to reach different parts of the code. Combined with code analysis, testing tools can automatically find inputs that exercise the hard-to-reach corner cases in the program for a small investment in compute time.

if (a == b) {
    if (a == c) {
        return 'Scalene'
    } else {
        return 'Isosceles'
    }
} else {
    return 'Equilateral'
}

To illustrate that difficulty, there are a couple of bugs in the example. The code should, if it were correct, classify a triangle given three of its sides, in the variables a, b, and c. The first bug is fairly easy to spot; a triangle with three equal sides (an equilateral triangle) is incorrectly classified as scalene. The second is caused by missing code; impossible triangles (such as 1, 2, 3) are not distinguished from physically possible triangles.

Finding Faults

Revisiting the notion of coverage, we can analyse the example above to estimate how good a given test set is. There are three logically different cases, so if we have a test case for each we can be sure we’ve tested all the behaviour of the program. This is the ideal application of computation. If these constraints were much more complicated, it would be unrealistic for a human to come up with every possible interaction. A list of them identified by automated testing would be a useful shortcut. The tester could then check each of the cases and decide if the outputs are right for the given inputs.
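For this example, one illustrative set of inputs covering the three cases (side lengths chosen purely for illustration) might be:

# One input per logically different case in the example above:
tests = [
    (2, 2, 2),  # a == b and a == c -> returns 'Scalene' (should be 'Equilateral')
    (2, 2, 3),  # a == b and a != c -> returns 'Isosceles' (correct)
    (3, 4, 5),  # a != b            -> returns 'Equilateral' (should be 'Scalene')
]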

Going back to the bugs, however, there’s a bug that doesn’t arise from the conditions already in the program. Impossible triangles (such as a straight line with sides 1, 2, 3) aren’t rejected. Even with total coverage of the branches in the program, without knowing what a triangle is, a computer is unlikely to come up with a test for it. This (often creative) application of knowledge, and the ability to ask questions of the form “what should the program do?” instead of simply “what does this program do?”, are the things that are hard to automate. This is where the complementary relationship between human and computer arises in software testing.

Having dealt with the “whys” of using automated techniques, the next article will start to answer the “hows”, explaining how we can turn the task of supplying inputs to programs into a problem a computer can try to solve.


Work Experience Student

All last week we were visited by a work experience student all the way from France.

During the course of his stay we discovered Alexi was excellent at creating 3D drawings, so we couldn’t pass up the opportunity to get a 3D drawing of our centre. Below is a video showing the building on Google Earth; we think he’s done a pretty great job.

You may have to download the Google Earth plugin to see this video; you can download it from here


SME Research and Development Tax Credits

Thousands of UK companies are eligible for R&D Tax Credits but many are unaware of the scheme, think they don’t qualify, or simply don’t know how to do it.

The government scheme, believe it or not, has been running for over 10 years but of course the rules have changed over that time. The thinking behind this scheme is to encourage, through tax incentives, SMEs to invest in innovation to boost the UK’s economic advantage and growth through its ability to deliver innovative products.

R&D Tax Credits work by reducing your taxable profit and therefore reducing your corporation tax. Don’t worry if you have little or even no corporation tax in the first place, or if you’re a start-up and haven’t paid any corporation tax yet: R&D Tax Credits can be converted into cash, providing help for your cash flow.


So how much can I get? Let’s say, for example, that for every £100,000 you spend on research and development, up to £225,000 could be deducted when calculating your taxable profits (or relievable losses). The tax claim is 225% of the qualifying R&D expenditure, so as your company would have already accounted for £100,000, the balance of £125,000 would be an additional deduction from your taxable profit. Loss-making companies can claim up to 14.5%.
The scheme is quite broad, so research can even include some aspects of software development. Not all activities involved in development are claimable, but generally, if you are a start-up developing a product which involves issues with scalability, integration, algorithmic development and technical innovation (there’s the word), then you are likely to qualify.

The two key criteria for being eligible for tax credits are “Innovation” and “Uncertainty”.

Common Misconceptions:

“We only do R&D for our clients not ourselves”

Many companies conduct research for partners as subcontractors and therefore believe it is only their clients that can claim for the R&D work. However, if you as a subcontractor are taking an element of risk, you can still be eligible for R&D Credits. If your company is taking a risk by innovating, improving or developing a process, product or service, then it can qualify for R&D Tax Credits.

“It’s not ground-breaking research”

A web developer that creates simple sites for their clients in WordPress will be unlikely to qualify for tax credits. However, if your work has an element of complexity, perhaps by integrating systems in a new and untested way, creating custom/bespoke software, developing algorithms or even incorporating augmented reality, you are likely to qualify. It is not so much about new features and functions within software or the web, but about the processes behind these features and functions. If there is technological uncertainty, it could potentially qualify.

“We haven’t done any research for two years”

Don’t worry. You can claim R&D tax credits for your last two accounting periods.

“We’ve received grant funding”

The amount you can claim back becomes limited once you have been in receipt of grant funding, but this doesn’t mean you won’t qualify. If a grant covers some or all of the costs of an R&D project, only the unsubsidised costs may be claimed as R&D Tax Credits. This rule is stricter if the grant money comprises any form of State Aid, as defined by the EU, where a partial subsidy paid to an SME may disqualify the whole of the otherwise qualifying expenditure from R&D Tax Credits.

So what can be claimed?

Costs must be ‘revenue’ in nature and not capital expenditure (there is a special Capital Allowances regime for R&D-related capital expenditure). Only specific types of expenditure can qualify for R&D:

  • Staffing costs of directors and employees directly associated with the R&D activity. Pure administrative activity and similar support staff will not be eligible, but in some cases an apportionment of costs may be eligible.
  • An apportionment of software and/or consumable items (including water, fuel and power), where necessary.
  • Agency employees, or workers who are ‘externally provided’ and actively engaged on R&D. Claimable costs are based on 65% of the payment actually made.

Generally, HMRC aim to deal with 95% of claims within 28 days, so it is always worth investigating. There are many companies out there that specialise in retrieving this tax benefit, so why not take a look?

Full details of the scheme can be found here
