What is the best test method for measuring sanitizer concentration in food and beverage applications?

Why measure sanitizer concentration?

To deliver high-quality produce that is safe for consumption, food producers must wash dirt, bacteria and chemicals from the product. This is done using wash baths, drenchers, showers or other washing methods dosed with disinfectants. It is critical to maintain a sufficient concentration of sanitizer in the wash water to kill bacteria and keep the water safe. Without adequate disinfection, bacteria can survive in the wash water and contaminate a whole batch of produce. Overdosing is also a concern: it can leave a high chemical residue on the product, create unwanted disinfection by-products, and waste money and chemicals in the process. It is therefore vital to accurately measure the chemical concentration in the wash water and control the chemical dose. Evidence that the correct concentrations are being maintained is required throughout the production process, so produce buyers can be confident that the product is being controlled correctly and is safe for sale and general consumption.

There are many factors to consider when choosing the best test method for your application. For more information, see our article: What to consider when choosing a sanitizer validation test?

Online measurement and validation testing

When monitoring sanitizer concentrations in wash water, there are two main types of measurement: continuous online measurement and periodic validation testing. A combination of the two is recommended for best control of sanitizer concentration.

Summary of test methods:

Visual Colorimetric Test

Colorimetry is based on a colour reaction between a reagent, supplied as a solution or tablet, and a particular sanitizer in the sample. The intensity of the colour correlates to the concentration of the chemical in the solution. Visual test kits allow users to manually compare the result of the chemical reaction to a set of reference colours that correspond to different concentrations. The user selects the closest colour and records the result manually.

This is a basic, low-cost test method which has been used for many years but isn’t wholly suitable for food processing applications. Colorimetric tests are often easy to perform but are prone to error if the measurement, preparation and timing steps are not performed precisely. The method relies on operators correctly matching the sample to the visual reference chart, but this choice is not always clear. Wash water is often cloudy or coloured, with bubbles or floating particles, and each of these can affect the visual quality of the sample, and therefore affect which colour match is made. If the sample vessel becomes scratched or stained, or if the ambient lighting is poor, a colour match can be made which is not representative of the sample. All these factors influence the subjective choice of the reference colour. 


Depending on the visual test used, the gap in concentration between reference colours can be quite large, which limits the accuracy of the result. In many visual test kits the test range is not large enough, so the user must perform a dilution, apply a calculation and record the result manually – all potential sources of error. The development of the colour depends on how the test is performed, so the accuracy of the test relies heavily on the ability and diligence of the user. To avoid triggering precautionary action, a user may consciously or unconsciously favour a more positive result, which could lead to non-conformance with the procedure.
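
To illustrate why the manual dilution step invites error, here is a minimal sketch of the arithmetic involved (the readings and dilution factors are assumed values for illustration, not taken from any specific kit):

```python
# Manual dilution arithmetic: the user dilutes the sample to bring it into
# the kit's test range, reads a value, then multiplies back up by hand.
def true_concentration(reading_mg_l: float, dilution_factor: float) -> float:
    """Scale a diluted-sample reading back to the original concentration."""
    return reading_mg_l * dilution_factor

# A 1-in-10 dilution that reads 4 mg/L implies 40 mg/L in the wash water.
print(true_concentration(4.0, 10.0))  # 40.0 mg/L
# Recording the wrong factor (e.g. 5 instead of 10) halves the result.
print(true_concentration(4.0, 5.0))   # 20.0 mg/L – a large, silent error
```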

Visual colorimetry tests are simple and low cost, but they are often prone to error and therefore do not provide the accuracy required for this critical application.

Test Strips

These low-cost tests may not be affected by suspended solids and bubbles in the way the visual tests are, but water colour and ambient light can still make selecting the correct reference colour difficult. A comparison against a colour chart is still required, so the tests are often inaccurate and subjective.

Data is recorded manually by the user, which introduces potential errors and could lead to contamination issues. Because the test strip colour evolves over time, the user can unintentionally read or record the wrong result. Without a locked digital log, data records can be lost, damaged or edited, leaving holes in the audit trail.

Test strips are a basic, low-cost, easy method for food production applications. They provide a repeatable test result but no traceability for auditing.


Test Strip Reader

Automatic test strip readers remove user subjectivity from the test strip method. They provide an accurate result with good granularity; however, in some cases the test range means that a large dilution is required, which introduces potential error. Test strips are unaffected by turbidity or floating particles, but the water colour can affect the strip colour and thus the result.

The test strip colour develops over time, so the reading must be taken at the correct moment. The result is displayed on-screen and, depending on the model, may be stored in a data log. Without this feature, test strip readers lack full traceability.

Test strip readers have the potential to provide accurate and repeatable results, but they still suffer the limitations of water colour and, on some models, a lack of traceability.

Drop Count Titration

Titration uses coloured indicators to identify the chemical concentration of a sample. An operator adds drops of a known reagent to the prepared sample until the target chemical is fully consumed and the indicator reacts with a distinct colour change. The volume of reagent added correlates to the concentration of the sample according to a multiplication factor.
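
As a rough illustration of the arithmetic and its built-in quantisation limit (the factor of 10 mg/L per drop is an assumed value, not taken from any specific reagent), consider:

```python
# Drop-count arithmetic: each drop of reagent corresponds to a fixed
# concentration step for a given sample volume. The factor is hypothetical.
MG_PER_L_PER_DROP = 10.0  # assumed conversion factor for this illustration

def drop_count_concentration(drops: int) -> float:
    """Convert a counted number of drops into a concentration in mg/L."""
    return drops * MG_PER_L_PER_DROP

print(drop_count_concentration(7))  # 70.0 mg/L
print(drop_count_concentration(8))  # 80.0 mg/L – one miscounted drop shifts
                                    # the result by a full 10 mg/L step
```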

This multi-step test requires different chemicals to be administered in the correct doses at the right time. The test must not be performed too quickly or haphazardly, and the right number of drops must be counted, correctly multiplied and recorded by the operator. Each of these steps is an opportunity for error. The sample vessel typically becomes stained over time, and the visual cue can be affected by turbidity, suspended solids, sample colour and ambient light. Missing or miscounting one or two drops gives an inaccurate reading, and the test precision is limited to the concentration represented by a single drop.

Drop count methods use calculations and correction factors based on assumed ratios between chemicals in solution. In some cases the actual mix of chemicals in solution differs from what is expected – for example, in food processing applications where active chemicals are consumed and transformed by reacting with organic material. The correction factor assumptions then no longer hold, and the test method calculations can yield incorrect results.

Drop count tests are inexpensive and common in many food processing factories; however, they have a high margin for error and are therefore not well suited to the application.


Photometry

A photometer instrument can be used to measure the same colour change as the visual test methods. The test resolution is much better, as the instrument uses a mathematical formula to correlate the intensity reading to the corresponding concentration. The device requires zeroing on a 'blank' (no chemical reaction) sample, but it is still adversely affected by coloured and turbid water with bubbles or suspended solids, as well as scratches, smudges, stains and moisture on the vessel.
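
The underlying relationship is typically the Beer-Lambert law, which links the light absorbed by the sample to its concentration. A minimal sketch of the conversion, with an assumed calibration constant rather than any real instrument's value:

```python
import math

# Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l).
# Molar absorptivity (epsilon) and path length (l) are folded into one
# calibration constant here; the value below is purely illustrative.
CALIBRATION = 0.012  # absorbance units per mg/L (hypothetical)

def concentration_from_intensity(i_blank: float, i_sample: float) -> float:
    """Convert transmitted light intensities into a concentration in mg/L."""
    absorbance = math.log10(i_blank / i_sample)  # A = log10(I0 / I)
    return absorbance / CALIBRATION

# Example: the sample transmits 85% of the light that the blank does.
print(round(concentration_from_intensity(100.0, 85.0), 1))  # ~5.9 mg/L
```

Anything that changes the measured intensity for reasons other than the colour reaction – turbidity, bubbles, a scratched vessel – feeds directly into the calculated concentration.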

Whilst photometers are often portable, lightweight instruments, the glass vessels and chemical reagents can pose a risk in a food production environment. The test range is limited by the chemistry and may require a large dilution factor, which introduces error. The chemistry must be performed accurately and with the right timing to develop the correct colour.

Depending on the model of the instrument, results may be stored digitally, providing an audit trail; not all models have this feature, and some instead rely on manual records.

Photometry is a good-quality method when performed correctly on suitable water samples; however, these conditions are often not met in food production environments. The method can be performed incorrectly, and the reliability of the result can therefore be called into question.

Laboratory Titration and Automatic Titration

The titration method uses carefully measured volumes of reagents and colour-changing indicators to determine the exact concentration of sanitizer in the sample.
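
As a hedged illustration of the underlying calculation (the volumes, titrant concentration and 1:1 stoichiometry below are assumed for the example, not taken from any standard method):

```python
def titration_concentration(c_titrant: float, v_titrant: float,
                            v_sample: float, factor: float = 1.0) -> float:
    """Sample concentration from a titration to the end point.

    c_sample = (c_titrant * v_titrant * factor) / v_sample, where `factor`
    is the stoichiometric ratio between titrant and target chemical.
    """
    return c_titrant * v_titrant * factor / v_sample

# Example: 12.5 mL of 0.01 mol/L titrant reaches the end point against a
# 25 mL sample with 1:1 stoichiometry -> 0.005 mol/L in the sample.
print(titration_concentration(0.01, 12.5, 25.0))
```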

The method requires reagents that are difficult to handle and must be carefully stored and managed; the equipment and chemicals are not suitable for production environments. Because the process is complex, skilled testers must be trained and managed. Manual titration, or automatic titration using a machine, must be performed in a climate-controlled laboratory by a trained scientist; otherwise, environmental factors or user error can dramatically affect the result.

There is not a standard titration method for every target chemical, and depending on the reaction and indicators, the measurement range may be limited. For certain chemicals used in food and beverage environments, standard titration techniques cannot measure the desired target directly, or achieve the desired range.  

Titration delivers very accurate, repeatable results. However, because the tests cannot be performed by operators near the production line, they are done less frequently, dosing takes longer to control, and labour costs are high. Therefore, whilst this method can suit large food processing operations with scientists in a laboratory, it is inefficient as a regular, quick test used to adjust the process in real time.


Single-Use Electrochemical Test

The electrochemical method uses a chemical reaction to generate an electric current that is proportional to the concentration of sanitizer in the sample. An electrochemical sensor microarray, paired with an electronic instrument, measures that current using a fully automated test method. The simple, resilient method has no complex steps and can therefore be performed by users of any skill level with minimal training. Each batch of sensors comes pre-calibrated, so no user calibration is required. The simple process reduces potential error sources and removes opportunities for human error by recording results directly into the instrument's data log.
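
A simplified sketch of the principle follows; the calibration factor is batch-specific in practice, and the value used here is purely illustrative:

```python
# Amperometric principle: the measured current is proportional to the
# sanitizer concentration. Each sensor batch carries its own calibration
# factor; the value below is a hypothetical stand-in.
UA_PER_MG_L = 0.45  # microamps per mg/L (assumed batch calibration)

def concentration_from_current(current_ua: float) -> float:
    """Convert a measured sensor current (uA) into a concentration (mg/L)."""
    return current_ua / UA_PER_MG_L

reading = concentration_from_current(1.8)  # e.g. a 1.8 uA measurement
print(f"{reading:.1f} mg/L")               # 4.0 mg/L
```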

The electrochemical method is not affected by sample colour, turbidity, floating particles, bubbles or ambient light conditions, and the instrument uses no chemicals or glassware, making it well suited to a food production environment. The measurement range is wide, so direct measurement is possible with no dilution, or only a small dilution factor.

The electrochemical test method gives high-quality, traceable results and is well suited to the food production environment.

Why should you consider using Kemio technology?


Expertly designed for the food and beverage market, Kemio sensor technology offers significant benefits and cost efficiencies compared with alternative measurement techniques. The Kemio method currently includes disinfection tests for chlorine, chlorine dioxide, chlorite and peroxyacetic acid (PAA).

To find out more information visit: Why Kemio technology?

Contact our team to find out how we can support you