
Friday, 29 May 2015

ATOMIC ABSORPTION SPECTROMETRY

INTRODUCTION

Atomic Absorption Spectrometry (AAS) is a technique which is used for the analysis of quantities of elements present in a sample by measuring the absorbed radiation by the chemical element of interest.
This is done by measuring the spectra produced when the sample is excited by radiation. The atoms absorb ultraviolet or visible light and get excited to higher energy levels. Atomic absorption technique measures the amount of energy in the form of photons of light that are absorbed by the sample.
A detector measures the intensity of light transmitted by the sample and compares it to the intensity that originally entered the sample. A signal processor then integrates the changes in absorbed intensity, which appear in the readout as peaks of energy absorption at discrete wavelengths.

The energy required for an electron to leave an atom is known as the ionization energy and is specific to each element. When an electron moves from one energy level to another within the atom, a photon is emitted or absorbed with an energy E equal to the difference between the two levels. Atoms of an element emit a characteristic spectral line. Every atom has its own distinct pattern of wavelengths at which it will absorb energy, due to the unique configuration of electrons in its outer shell.
This enables the qualitative analysis of a sample. The concentration is calculated based on the Beer-Lambert law. Absorbance is directly proportional to the concentration of the analyte absorbed for the existing set of conditions. The concentration is usually determined from a calibration curve, obtained using standards of known concentration or certified reference materials (CRMs). However, applying the Beer-Lambert law directly in AAS is difficult due to:
·       variations in atomization efficiency from the sample matrix; and
·       non-uniformity of concentration and path length of analyte atoms (in graphite furnace AA).
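
As a minimal sketch of the Beer-Lambert arithmetic (all numbers below are hypothetical), absorbance is computed from the transmitted and incident intensities and then converted to a concentration:

```python
import math

def absorbance(I_transmitted, I_incident):
    """Absorbance from measured intensities: A = -log10(T) = -log10(I/I0)."""
    return -math.log10(I_transmitted / I_incident)

def concentration(A, epsilon, b):
    """Beer-Lambert law A = epsilon*b*c, solved for c.
    epsilon: molar absorptivity (L mol^-1 cm^-1), b: path length (cm)."""
    return A / (epsilon * b)

# Hypothetical example: half of the incident light is transmitted
A = absorbance(0.5, 1.0)                       # ≈ 0.301
c = concentration(A, epsilon=250.0, b=1.0)     # concentration in mol/L
```

In practice, as the bullet points note, these ideal relations are distorted by the atomizer, which is why a measured calibration curve is used instead of the law applied directly.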

The chemical methods used are based on matter interactions, i.e. chemical reactions. For a long time these methods were essentially empirical and demanded, in most cases, great experimental skill. In analytical chemistry, AAS is a technique used mostly for determining the concentration of a particular metal element within a sample. AAS can be used to analyse the concentrations of over 62 different metals in a solution. Typically, the technique makes use of a flame to atomize the sample, but other atomizers, such as a graphite furnace, are also used. Three steps are involved in turning a liquid sample into an atomic gas:

1. Desolvation – the liquid solvent is evaporated, and the dry sample remains;
2. Vaporization – the solid sample vaporizes to a gas; and
3. Volatilization – the compounds that compose the sample are broken into free atoms.

To measure how much of a given element is present in a sample, we must first establish a basis for comparison using certified reference materials or known quantities of that element to produce a calibration curve.
To generate this curve, a specific wavelength is selected, and the detector (usually a photomultiplier tube) is set to measure only the energy transmitted at that wavelength. As the concentration of the target atom in the sample increases, the absorption also increases proportionally.

A series of samples containing known concentrations of the element to be measured is analysed, and the corresponding absorbance, the negative logarithm of the fraction of light transmitted, is recorded.

The measured absorbance at each concentration is then plotted, so that a straight line can be drawn through the resulting points. From this line, the concentration of the substance under investigation is interpolated from the substance’s absorbance. The use of special light sources and the selection of specific wavelengths allow for the quantitative determination of individual components in a multi-element mixture.
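
The calibration workflow described above can be sketched in a few lines; the standard concentrations and absorbances below are made up purely for illustration:

```python
import numpy as np

# Hypothetical standards: concentrations (ppm) and measured absorbances
conc_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
abs_std  = np.array([0.00, 0.11, 0.22, 0.44, 0.88])

# Least-squares straight line A = slope*c + intercept through the standards
slope, intercept = np.polyfit(conc_std, abs_std, 1)

def conc_from_absorbance(A):
    """Read a concentration off the calibration line (the evaluation function)."""
    return (A - intercept) / slope

# An unknown sample with absorbance 0.33 falls between the standards
unknown = conc_from_absorbance(0.33)   # ≈ 3.0 ppm for these made-up standards
```

This is interpolation within the range spanned by the standards; readings outside that range should be diluted or re-measured rather than extrapolated.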

BASIC PRINCIPLE

The selectivity in AAS is very important, since each element has a different set of energy levels and gives rise to very narrow absorption lines. Hence, the selection of the monochromator is vital: to obtain a linear calibration curve (Beer's law), the bandwidth of the absorbing species must be broader than that of the light source, which is difficult to achieve with ordinary monochromators. The monochromator is a very important part of an AA spectrometer because it is used to separate the thousands of lines generated by all of the elements in a sample.

Without a good monochromator, detection limits are severely compromised. A monochromator is used to select the specific wavelength of light that is absorbed by the sample and to exclude other wavelengths. The selection of the specific wavelength of light allows for the determination of the specific element of interest when it is in the presence of other elements. The light selected by the monochromator is directed onto a detector, typically a photomultiplier tube, whose function is to convert the light signal into an electrical signal proportional to the light intensity. The challenge of requiring the bandwidth of the absorbing species to be broader than that of the light source is solved with radiation sources with very narrow lines.

The study of trace metals in wet and dry precipitation has increased in recent decades because trace metals have adverse environmental and human health effects. Some metals, such as Pb, Cd and Hg, accumulate in the biosphere and can be toxic to living systems.
Anthropogenic activities have substantially increased trace metal concentrations in the atmosphere. In addition, acid precipitation promotes the dissolution of many trace metals, which enhances their bioavailability. In recent decades, heavy metal concentrations have increased not only in the atmosphere but also in pluvial precipitation. Metals, such as Pb, Cd, As, and Hg, are known to accumulate in the biosphere and to be dangerous for living organisms, even at very low levels. Many human activities play a major role in global and regional trace element budgets. Additionally, when present above certain concentration levels, trace metals are potentially toxic to marine and terrestrial life. Thus, biogeochemical
perturbations are a matter of crucial interest in science.

The atmospheric input of metals exhibits strong temporal and spatial variability due to short atmospheric residence times and meteorological factors. As in oceanic chemistry, the impact of trace metals in atmospheric deposition cannot be determined from a simple consideration of global mass balance; rather, accurate data on net air or sea fluxes for specific regions are needed.

Particles in urban areas represent one of the most significant atmospheric pollution problems, and are responsible for decreased visibility and other effects on public health, particularly when their aerodynamic diameters are smaller than 10 μm, because these small particles can penetrate deep into the human respiratory tract. There have been many studies measuring concentrations of toxic metals such as Ag, As, Cd, Cr, Cu, Hg, Ni and Pb in rainwater and their deposition into surface waters and on soils. Natural sources of aerosols include terrestrial dust, marine aerosols, volcanic emissions and forest fires. Anthropogenic particles, on the other hand, are created by industrial processes, fossil fuel combustion, automobile mufflers, worn engine parts, and corrosion of metallic parts. The presence of metals in atmospheric particles is directly associated with the health risks of these metals. Anthropogenic sources have substantially increased trace metal concentrations in atmospheric deposition.

The instrument used for atomic absorption spectrometry can have either of two atomizers. One attachment is a flame burner, which uses acetylene and air fuels. The second attachment consists of a graphite furnace that is used for trace metal analysis. Figure 1 depicts a diagram of an atomic absorption spectrometer.



Fig. 1. Schematic of an atomic absorption spectrometer; the spectral (wavelength) range is set by the dispersion of the grating across the detector.
           
Flame and furnace spectroscopy has been used for years for the analysis of metals. Today these procedures are used more than ever in materials and environmental applications, owing to the need for lower detection limits and for trace analysis in a wide range of samples. Advances in Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) have in some respects left Atomic Absorption (AA) behind. AA remains an excellent technique, however, with a selectivity that ICP does not offer.

Figure 2 shows a diagram of an atomic absorption spectrometer with a graphite furnace.


 AAS is a reliable chemical technique to analyze almost any type of material. This post describes the basic principles of atomic absorption spectroscopy in the analysis of trace metals, such as Ag, As, Cd, Cr, Cu, and Hg, in environmental samples.

Trace metals from the atmosphere are deposited by rain, snow and dry fallout. The predominant processes of deposition by rain are rainout and washout (scavenging). Generally, in over 80 % of wet precipitation, heavy metals are dissolved in rainwater and can thus reach and be taken up by the vegetation blanket and soils. In AAS, light of a specific wavelength, selected appropriately for the element being analyzed, is passed through the atomized sample; the absorption of this light by the element of interest is proportional to the concentration of that element.

Quantification is achieved by preparing standards of the element.
  • AAS intrinsically more sensitive than Atomic Emission Spectrometry (AES)
  • Similar atomization techniques to AES
  • Addition of radiation source
  • High temperature for atomization necessary
  • Flame and electrothermal atomization
  • Very high temperature for excitation not necessary; generally no plasma/arc/spark in AAS

We will discuss the Flame AAS technique and AAS with Graphite Furnace (GFAA) in the upcoming posts.

Thursday, 26 February 2015

CALIBRATION AND THE ROLE OF CALIBRATION SAMPLES IN METAL OPTICAL EMISSION SPECTROMETER.

THEORY OF CALIBRATION:

Concentration vs Intensity calibration curve
Calibration comprises measurement of calibration samples and determination of the functional relationship between the intensity I of the line of an analyte and its concentration c in these samples. This functional relationship is the calibration function or calibration curve. It includes relationships between vaporisation, excitation, radiation offtake, dispersion and the measured value. Since spectrochemical analysis is a process of analysis by comparison (in contrast to absolute methods such as weighing), it is necessary to carry out calibration with samples of accurately known concentration, the calibration samples.

The calibration function must not be confused with its inverse, the read-out or evaluation function. In the case of the calibration function I = f1(c), the concentrations of the calibration samples are assumed to be free of error, and the errors (deviations from a best-fit curve after correction of the intensities for systematic errors) are imputed entirely to the spectrometric method, so that correlation coefficients quoted as a quality index for the regression are of little use. With the evaluation function c = f2(I), the concentration c of an analyte in an analytical sample is determined, which is accordingly subject to error; f2 is the inverse of f1 (f2 = f1^-1).

For optical emission spectrometry there is no theory of calibration curves which can be used for practical purposes. There are formulae for which it is assumed that the relationship between line intensity and concentration can be represented as a power function: I = I0·c^k. The calibration function can be represented mathematically in various ways:

linear calibration function: I = f(c) = a0 + a1·c
non-linear calibration function: I = f(c) = a0 + a1·c + a2·c^2 + ... + an·c^n

The extent to which the regression approaches the true course of the calibration
curve can be discerned from the residual scatter, namely at the point when the
addition of further terms to the approximation function does not produce any
further improvement in the residual scatter.
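
The degree-selection procedure described above can be sketched as follows; the intensity/concentration pairs are made up for illustration:

```python
import numpy as np

# Hypothetical intensity/concentration pairs showing slight curvature
# (e.g. from self-absorption at higher concentrations)
c = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
I = np.array([12.0, 58.0, 113.0, 218.0, 408.0, 575.0, 720.0])

# Fit I = a0 + a1*c, then add higher-order terms and watch the residual
# scatter: stop once an extra term no longer improves it appreciably.
scatter = {}
for degree in (1, 2, 3):
    coeffs = np.polyfit(c, I, degree)
    residuals = I - np.polyval(coeffs, c)
    scatter[degree] = float(np.sqrt(np.mean(residuals ** 2)))

# For this data the quadratic term reduces the scatter markedly, while
# the cubic term improves far less: the quadratic form is adequate here.
```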

CALIBRATION SAMPLES


The fundamental role of calibration samples is attested by the international community and by the International Organization for Standardization (ISO), which has delivered the following definitions:
Reference Materials (RM): materials or substances whose properties are so well defined that they can be used to calibrate an instrument, verify a measurement or assign values to materials.

CRM sample with Spark analysis spots
Certified Reference Materials (CRM): materials whose values for one or more properties are certified by means of a valid technical procedure and accompanied by a certificate or other document issued by a qualified technical body (a public or private organization or society that delivers a certificate for the reference material).

Calibration samples present three disadvantages:
1) They are expensive.
2) Their dimensions and shapes do not always suit the sample-holder stand of the spectrometer.
3) They are available only for some elements and concentrations.

In some cases calibration samples can be synthesised, for example by alloying or diluting part of a charge. Because of this manipulation, the calculated values are rarely reliable and their composition should be confirmed by chemical analysis.

RECALIBRATION SAMPLES

When calibrating spectrometers with calibration samples (reference samples), recalibration samples are measured a number of times in order to obtain a reliable nominal value suitable for calibration. Additive and/or multiplicative changes in the sensitivity of the spectrometer displace the calibration curves in the linear scale of the co-ordinate system. In order to trace (calculate) the actual intensity values at any later time back to the nominal intensity values established at the time of calibration, a low-point (LP) and a high-point (HP) intensity are required for each analyte channel. In metal analysis with spark discharge the low points of all the analyte channels are usually measured with the pure base (Fe, Al, Cu, ...). The high points are usually measured from synthetic samples containing as many elements as possible with good homogeneity and precision.

The synthetic composition is given as a guide analysis, and the samples often do not lie exactly on the calibration curves. The mathematical procedure of recalibration is an automated process.
In emission spectrometry recalibration samples are gradually consumed, because the surface is polished before each recalibration. When recalibration samples are replaced there is no guarantee that, even with the same sample number, the new sample concentrations will correspond exactly to those of the sample being replaced. For this reason, when calibrating a spectrometer for metal analysis, a minimum supply of recalibration samples should be available, for example five recalibration samples of each type.
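
A minimal sketch of the two-point (LP/HP) drift correction described above; the nominal and measured intensities are hypothetical:

```python
# Two-point recalibration: map today's measured intensities back to the
# nominal intensities recorded at calibration time, I_nominal = a + b*I_measured.

LP_nominal, HP_nominal   = 100.0, 5000.0   # stored at calibration time
LP_measured, HP_measured = 90.0, 4700.0    # measured on the drifted instrument

b = (HP_nominal - LP_nominal) / (HP_measured - LP_measured)  # multiplicative term
a = LP_nominal - b * LP_measured                             # additive term

def recalibrate(I_measured):
    """Correct a raw channel intensity before applying the calibration curve."""
    return a + b * I_measured

# Any routine reading is corrected, then evaluated on the original curves
corrected = recalibrate(2300.0)
```

By construction the correction maps the measured low and high points exactly back onto their nominal values, so the original calibration curves remain valid.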

The frequency of recalibration depends on the instrument and its use.
Dependence on the instrument means that devices of the same kind must be recalibrated at different intervals, especially because of differences in phototube stability. Dependence on use means that, even if stability is the same, the recalibration frequency depends on the kind of analysis (trace analysis, sorting analysis).

(Note: The above post is written in context to calibration of Spark Optical emission spectrometer for metal and alloy analysis.)


Tuesday, 24 February 2015

DIFFERENCE BETWEEN PHOTOMULTIPLIER (PMT) & CHARGE-COUPLED DEVICE (CCD) DETECTORS

Some of the most common differences between photomultiplier tube (PMT) detectors and charge-coupled device (CCD) detectors:


1.       Photomultiplier tubes (PMTs) and charge-coupled devices (CCDs) both produce spectra. The difference is that a PMT is used with a small slit in front of it to control the bandwidth of light being detected, whereas a CCD takes full advantage of the dispersed light: each pixel column corresponds to a wavelength (resolution and range depend on the grating used). A PMT requires scanning of the monochromator to collect a spectrum; a CCD takes a single snapshot and you have a spectrum. The sensitivity and dynamic range of a CCD are generally lower than those of a PMT.


2.     A photomultiplier tube is a detection device made from a glass vacuum tube with a series of metal plate electrodes (dynodes). A CCD is a solid-state detector made from semiconductor materials.


3.     The main difference is one of sensitivity. Generally speaking the better the spectral resolution of the instrument the lower the amount of light reaching the detector and so you need more sensitivity in your detector. A PMT measures a single point in the spectrum at a time whereas with a CCD the complete spectrum is imaged across the CCD and so can be measured all at the same time. 


4.     An instrument with a CCD is usually much faster and cheaper but will not have as good a spectral resolution (the ability to resolve absorbance peaks very close to each other).


5.     CCDs and photomultipliers vary in a number of aspects. One difference is gain, a photomultiplier has gain whereas a CCD does not (hence the multiplier bit of PMT). The PMT gain may be up to 10,000,000 and is available at high speeds and for large area detectors, which means that one can usually get close to the theoretical noise floor. On the other hand, PMTs have poor quantum efficiency compared to CCDs (25% typ against 85% typ) so you can sometimes get better performance with a CCD if you can go slowly enough.
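
A toy shot-noise model (illustrative numbers only, not from any particular device) shows why the trade-off in point 5 depends on light level: PMT gain effectively removes readout noise, while the CCD's higher quantum efficiency wins once enough photons arrive.

```python
import math

def snr(n_photons, qe, read_noise_e):
    """Shot-noise-limited SNR for a detector with quantum efficiency `qe`
    and effective read noise (in electrons) `read_noise_e`."""
    n_e = qe * n_photons                      # detected photoelectrons
    return n_e / math.sqrt(n_e + read_noise_e ** 2)

# PMT: low QE, but gain (~1e7) makes effective readout noise negligible.
# CCD: high QE, but a real read-noise floor per readout.
snr_pmt_bright = snr(1000, qe=0.25, read_noise_e=0.0)
snr_ccd_bright = snr(1000, qe=0.85, read_noise_e=10.0)   # CCD wins here
snr_pmt_dim = snr(20, qe=0.25, read_noise_e=0.0)
snr_ccd_dim = snr(20, qe=0.85, read_noise_e=10.0)        # PMT wins here
```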


6.     PMTs are also typically single channel devices, although 16 channel linear arrays are available. CCDs are usually linear or 2D arrays.


7.     In a dispersive spectrometer a linear CCD array can capture the entire spectrum in one measurement. A single channel PMT must have the spectrum scanned across it sequentially to produce the entire spectrum.
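
The pixel-to-wavelength mapping behind point 7 can be sketched as follows (the dispersion, pixel pitch and starting wavelength are hypothetical):

```python
import numpy as np

# Hypothetical dispersive spectrograph: each CCD pixel column sees one wavelength.
n_pixels   = 1024          # linear array width
pixel_um   = 26.0          # pixel pitch in micrometres
dispersion = 0.6           # linear dispersion at the focal plane, nm/mm

start_nm = 300.0           # wavelength falling on pixel 0
nm_per_pixel = dispersion * pixel_um / 1000.0          # 0.0156 nm per pixel
wavelengths = start_nm + nm_per_pixel * np.arange(n_pixels)

# The whole window is captured in one readout; a single-channel PMT
# would need n_pixels sequential monochromator steps to cover the same span.
coverage = wavelengths[-1] - wavelengths[0]            # ~16 nm in one shot
```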


8.     PMTs are typically preferable to CCDs in spectroscopic applications for several reasons. The ability to adjust the gain of each PMT allows a manufacturer to tune the response of each PMT to the specific signal being measured, so every element you are trying to detect can be analyzed at optimum conditions. Solid-state CCDs are a compromise: every element detected sees the same conditions, so most are compromised.


9.     Also, PMTs can be heated and held at constant temperature (in well-made instruments) to prevent drift caused by variation in temperature. If you try to heat a CCD, the noise level will go up, and the signal-to-noise ratio will degrade as a result. CCDs are sometimes cooled to try to improve their S/N ratio, but usually not cooled enough to really help much, due to the condensation issues that arise.


10. A third advantage of PMTs is that they can be used in a vacuum chamber without long-term degradation over decades of use. The surface of a CCD will degrade under vacuum over a few (8-15) years. Most manufacturers making CCD-based instruments opt for a nitrogen or argon flush, rather than vacuum, to displace the oxygen from the detector chamber. This method results in decreased performance compared to PMTs, and is used in lower-performance, less expensive spectrometers.

Monday, 9 February 2015

LASER INDUCED BREAKDOWN SPECTROSCOPY


LIBS is a spectroscopy technique in which a short laser pulse is focused on a target sample. The laser energy heats and ionizes the sample material, creating a small area of plasma. Excited atoms and ions in the plasma emit light, which is collected, resolved into a spectrum by a spectrometer and analyzed by a suitably calibrated photon detector. Each chemical element has a unique set of wavelengths, a signature which can be optically resolved from the obtained spectrum. As a result, the elemental composition of the target sample can be determined. Some general information about the technique is provided below:

I. Advantages
II. Considerations
  • Spectral coverage vs. resolution
  • Light sensitivity
III. General Applications


I. Advantages


LIBS is considered one of the most efficient and user-friendly analytical techniques for trace elemental analysis in gases, solids, and liquids. Some of its major advantages include:
  • Real-time measurements: online monitoring and quality control of industrial processes
  • Noninvasive, nondestructive technique: valuable samples can be reused, sensitive materials can be analyzed, suitable for in-situ biological analysis
  • Remote measurements can be done from up to 50 meters distance: can be used in hazardous environments and for space exploration missions on other planets
  • Compact and inexpensive equipment: can be widely used in industrial environments, perfect for field measurements
  • High-spatial resolution: can obtain 2D chemical and mechanical profiles of virtually any solid material with up to 1 µm precision
  • Little or no sample preparation is required: reduced measurement time, greater convenience, less opportunity for sample contamination
  • Samples can be in virtually any form: gas, liquid, or solid
  • Analysis can be performed with a very small amount of sample (nanograms): very useful in chemistry for characterization of new chemicals and in material science for characterization of new composite materials or nanostructures
  • Virtually any chemical element can be analyzed, such as heavier elements unavailable for X-ray fluorescence
  • Analysis can be done on extremely hard materials like ceramics and superconductors; these materials are difficult to dissolve or sample to perform other types of analysis
  • In aerosols both particle size and chemical composition can be analyzed simultaneously
II. Considerations
  •  Spectral Coverage vs. Resolution

Compact echelle spectrometers designed for LIBS applications are offered by several manufacturers.
In the rare occasion that an application requires even higher resolution, the Acton Series of spectrometers, with their long focal lengths, are extremely useful. The latest models use toroidal mirrors with improved spectral quality.
For a detector with 1024 horizontal pixels, each 26 µm wide, the theoretical field of view is 26.6 mm. But since a standard 25 mm intensifier is used, the effective field of view is 25 mm.
For example, if you decided to use a 2400 groove/mm grating in the Acton Series 2500 in order to enhance resolution, the linear dispersion would be 0.6 nm/mm and the spectral coverage 0.6 × 25 = 15 nm. To cover a spectral range between 300 and 600 nm (for example), you would need to perform at least 20 acquisitions, each time moving the spectrometer grating to a new position and "gluing" all 20 spectra together. This is a very standard procedure which can be done painlessly and automatically.
The only disadvantage is that acquisition of one spectrum could take up to a few dozen seconds or longer, which is why the echelle spectrometer has become extremely popular, especially in industrial and field applications where real-time measurements such as online quality control are a must.
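
The arithmetic in the example above (0.6 nm/mm dispersion onto a 25 mm intensifier, stitching 300-600 nm) can be checked in a few lines:

```python
import math

# Values from the example above
dispersion_nm_per_mm = 0.6      # linear dispersion with the 2400 g/mm grating
sensor_width_mm      = 25.0     # usable width of the intensifier

# Spectral window captured at one grating position
window_nm = dispersion_nm_per_mm * sensor_width_mm     # 15 nm

# Grating positions needed to stitch 300-600 nm together
n_shots = math.ceil((600.0 - 300.0) / window_nm)       # 20
```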

  •  Light sensitivity

Typically, the laser pulse in LIBS applications lasts from femto- to nanoseconds (10⁻¹⁵ to 10⁻⁹ s). Especially in applications where non-invasive and non-destructive analysis is required, a relatively small amount of laser energy is transferred to the sample. Therefore, one laser pulse produces a weak emission signal which is hard or impossible to collect with conventional CCD detectors. That is why intensified CCDs (ICCDs) are widely used in LIBS.
To improve the emission signal by a factor of 10-30, a scheme with two orthogonal laser beams is often used. In this dual-pulse scheme, the first and usually more powerful laser pulse ablates and atomizes sample material, while the second one heats the ablated material further, enhancing the intensity of atomic and ionic lines. Factors such as the laser excitation energy of both pulses and the time delay between the pulses play a crucial role in achieving signal enhancement. This technique increases the sensitivity of LIBS by at least one order of magnitude and broadens the range of possible applications.
If measurement duration is not an issue, a regular CCD (1024×1024 pixels, 13 µm pixel size) can be used together with a spectrometer for LIBS applications. To reach the light level required by a non-intensified CCD, long-exposure measurements should be performed; in this case the plasma emission signal is accumulated on the CCD over multiple laser pulses. However, one should be careful about excessive accumulation of background noise and a low signal-to-noise ratio. This is especially important when performing measurements in the open air without an enclosed sample chamber: since the CCD stays open for a long period of time, all sources of stray light in the room should be eliminated and measurements should be conducted in darkness. A regular CCD is also a more forgiving system than an ICCD, because intensified CCDs are prone to permanent damage by excessive light levels. Extra care should be taken not to expose ICCDs to bright sources of light such as laser reflections; a regular CCD, by contrast, is difficult to damage with excessive light.

III. General Applications

Because LIBS generally requires little to no sample preparation, uses simple instrumentation, and can easily be performed in the field and in hazardous industrial environments in real time, it is a very attractive analytical tool. The following are a few examples of real-life applications where LIBS is successfully used:
  • Express-analysis of soils and minerals (geology, mining, construction)
  • Exploration of planets (such as projects using LIBS for analyzing specific conditions on Mars and Venus to understand their elemental composition)
  • Environmental monitoring (Real-time analysis of air and water quality, control of industrial sewage and exhaust gas emissions)
  • Biological samples (non-invasive analysis of human hair and teeth for metal poisoning, cancer tissue diagnosis, bacteria type detection, detection of bio-aerosols and bio-hazards, anthrax, airborne infectious disease, viruses, sources of allergy, fungal spores, pollen). Replacing antibody, cultural, and DNA types of analysis
  • Archeology (analysis of artifacts restoration quality)
  • Architecture (quality control of stone buildings and glasses restoration)
  • Army and Defense (detection of biological weapons, explosives, backpack-based detection systems for homeland security)
  • Forensic (gun shooter detection)
  • Combustion processes (analysis of intermediate combustion agents, combustion products, furnace gases control, control of unburned ashes)
  • Metal industry (in-situ metal melting control, control of steel sheets quality, 2D mapping of Al alloys)
  • Nuclear industry (detection of cerium in U-matrix, radioactive waste disposal)

Tuesday, 20 May 2014

WEAR METALS AND THEIR RESPECTIVE PARTS IN DIESEL LOCOMOTIVES

INTRODUCTION

Lube oil analysis of diesel locomotives using several analytical techniques for condition monitoring of the engine is very important for long engine life. The techniques involved include elemental analysis by RDE-AES or ICP-AES, Fourier transform infrared spectroscopy, viscosity measurement, particle counting, wear debris analysis and Karl Fischer moisture determination. We discussed some of these techniques in earlier posts and will cover the rest in upcoming posts. In this post we discuss the wear metals and their respective affected parts.

Elemental analysis is the most basic of the lube oil tests; it is used to determine the presence of wear metals in diesel locomotive oils. Two types of instruments are generally used for the elemental analysis of lube oils, RDE-AES and ICP-AES, which can detect more than 20 elements in lube oil. We will discuss the differences between the two instruments in detail in upcoming posts; in the meantime, both serve the same purpose of determining wear metals at the ppm level in lube oil.

While machinery is working, wear metal debris particles are produced by the rubbing motion of mechanical component parts, through either normal or abnormal wear; these wear metals can be detected using spectroscopy. The wear metals indicate their respective sources, i.e. engine parts. For every diesel engine, limits are set for each metal in ppm, above which failure may occur because of the higher rate of wear. Spectroscopy therefore makes it much easier to monitor engine condition and to take appropriate action before it is too late, preventing bigger losses.
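
A trivial sketch of such limit checking follows; the limits and readings below are entirely hypothetical, since real condemning limits come from the engine manufacturer and the lab's own trend data:

```python
# Hypothetical condemning limits (ppm) for one engine type
LIMITS_PPM = {"Fe": 100, "Cu": 40, "Al": 20, "Cr": 15, "Pb": 30, "Si": 25}

def flag_wear(results_ppm):
    """Return the elements whose measured concentration exceeds its limit."""
    return {el: ppm for el, ppm in results_ppm.items()
            if el in LIMITS_PPM and ppm > LIMITS_PPM[el]}

# Example spectrometer reading for one oil sample
sample = {"Fe": 135, "Cu": 12, "Al": 22, "Cr": 3, "Pb": 5, "Si": 10}
alerts = flag_wear(sample)   # {'Fe': 135, 'Al': 22}
```

The flagged elements are then traced back to their likely sources from the table below (e.g. iron pointing to bushings, shafts or rings, aluminium to pistons or filtration).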


BELOW IS THE LIST OF THE WEAR METALS AND THEIR RESPECTIVE SOURCES:

Aluminium: Pistons, inappropriate filtration, crankcases on reciprocating engines, bearing surfaces, pumps, thrust washers.

Copper: Bushings, thrust plates.

Silicon: Inappropriate air filtration.

Iron: Bushings, shafts, rings.

Chromium: Cylinder liners, exhaust valves.

Tin: Main bearings, con rods, TSC bearings.

Lead: Con rods, TSC bearings, seals, solder, grease.

Sodium: Water coolant leakage into oil.

Boron: Water coolant leakage.

Magnesium: Oil additives.

Nickel: Alloy from bearing metals.

Molybdenum: Piston rings.

Phosphorus: Anti-wear additive.

Potassium: Coolant leak, airborne contaminant.

Silver: Bearing cages (silver plating).

Zinc: Anti-wear additive.

Calcium: Detergent/dispersant additive.

Barium: Synthetic oil additive (synthetic fluid).
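The monitoring idea described above, comparing each metal's ppm reading against an engine-specific limit, can be sketched in a few lines. Note that the limit values below are invented placeholders for illustration, not real locomotive specifications.

```python
# Hypothetical per-metal warning limits in ppm (placeholder values,
# not actual diesel locomotive specifications).
WARNING_LIMITS_PPM = {
    "Fe": 100, "Cu": 40, "Al": 20, "Cr": 15, "Pb": 30, "Si": 25,
}

def flag_wear_metals(reading_ppm):
    """Return the metals whose measured concentration exceeds its limit."""
    return {metal: ppm for metal, ppm in reading_ppm.items()
            if ppm > WARNING_LIMITS_PPM.get(metal, float("inf"))}

# Example spectrometric reading (ppm) for one oil sample
sample = {"Fe": 140, "Cu": 12, "Al": 25, "Cr": 5}
print(flag_wear_metals(sample))  # {'Fe': 140, 'Al': 25}
```

Here iron and aluminium exceed their assumed limits, which, per the table above, would point the analyst toward bushings, shafts, rings or piston-side wear for closer inspection.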



Thursday, 2 January 2014

BREATH ALCOHOL ANALYSER

INTRODUCTION

Many humans are addicted to the psychoactive effects of alcohol; thus, it is the most common legal drug of choice. However, the influence of alcohol, or the over-consumption of alcoholic drinks, is often the cause of crimes and violence, including fatal traffic accidents. Traffic deaths rank highest among all causes of death, and alcohol-related traffic fatalities rank highest within this category. Safety agencies are challenged to locate intoxicated drivers and remove them from public roadways.
Breath was considered a very desirable and objective test specimen for determining a vehicle operator's alcohol concentration and impairment level for evidential purposes. In the early 1950s, the first Breathalyzer set the basis for the scientific acceptance of analyzing alcohol in breath. Law-enforcement personnel implemented and administered these noninvasive and efficient tests as part of their drunk-driving enforcement.

THE TECHNOLOGY OF BREATH-ALCOHOL TESTING

The technology of breath-alcohol testing has changed fundamentally over the years, driven partly by general technology advancements and partly by defense challenges. The following sections describe the most recognized technologies used for preliminary ("screening") and evidentiary breath-alcohol analysis, along with their advantages and disadvantages:

1. Wet-chemical Oxidation technology: 

The analytical principle was based on the chemical oxidation of alcohol by a mixture of dichromate and sulfuric acid held in vials. It paved the way for the scientific acceptance of evidential breath-alcohol testing by the international forensic community and the courts.

Advantages:
  • Compact table-top package.
  • Relatively quick analysis.
  • Accurate and specific to alcohol.


Disadvantages:
  • Minimum required breath volume (< 60 mL).
  • Handling of the vials is critical, as they contain sulfuric acid.
  • The Breathalyzer's biggest shortcoming, however, was the fact that the system was operator dependent.
  • Growing legal attacks in the eighties targeted this vulnerability to manipulation by the operator; thus, the equipment was rapidly replaced by newer, less operator-dependent instruments.
2. Solid-state sensor technology:

Commonly called "Taguchi" cells, these are metal oxide semiconductor based sensors manufactured by Figaro of Japan. The Taguchi cell operates by adsorption of gas molecules on the surface of a semiconductor, which transfers electrons due to the differing energy levels of the gas molecules on the semiconductor's surface. These types of instruments are sold mainly to consumer markets as opposed to law enforcement. None of these sensor-type instruments are approved by the National Highway Traffic Safety Administration (NHTSA) as evidential breath testers.

Advantages:
  • The sensors are small in size and rather inexpensive to manufacture. Lowest priced breath testers.
  • These instruments are sold in convenience stores and mail-order-catalogues.
Disadvantages:
  •  The sensor is very unstable, drifty and non-specific to alcohol.
  •  It responds to all hydrocarbons (organic vapors) and will habitually produce false-positive alcohol readings caused by smokers' and car-exhaust CO as well as many other environmental vapors and gases.
  • This sensor is partial-pressure sensitive and therefore changes sensitivity with changes in altitude and elevation.
  • This sensor is sensitive to changes in ambient temperature, humidity and breath flow patterns.
  • For these and other reasons, solid-state sensor instruments can’t be employed in evidential and legal applications.
3. Electro-chemical cell technology (“EC”):

Most commonly called a "fuel cell". Fuel-cell technology for alcohol analysis was first introduced in the early 1970s by an Austrian researcher. The EC sensor requires a sampling system consisting of a piston or bellows pump assembly that applies a very precise amount (~1 cm³) of breath to the sensor. Volume consistency is highly important because the current produced by the sensor is proportional to the total number of alcohol molecules converted in the sensor.
The sensor is composed of an immobilized electrolyte flanked by an active and a passive electrode. The electrolyte and electrode materials are selected such that the alcohol to be measured is electrochemically oxidized and converted at the active electrode. The resulting change in electronic conductivity causes a current to flow from the active to the passive electrode. The total electrochemical reaction is evaluated by time integration of the sensor's current. The sensor's life expectancy is approximately 4-5 years.

Advantages:
  • The sensor is highly specific to alcohol.
  • The measurement cannot be biased or influenced by endogenous substances such as acetone (diabetics and starving people), CO or Toluene.
  • The sensor is highly sensitive, down to 0.1 ppm.
  • Accuracy meets specifications for evidential instruments (NHTSA) and remains stable ≥ 6 months before having to calibrate it again.
  • Its expected life term is approximately 5 years.
 Disadvantages:
  • EC-based instruments cannot observe the breath-alcohol concentration throughout the subject's exhalation. This does not allow detection of alveolar breath ("deep lung air"), mouth alcohol, belching, burping, Gastro Esophageal Reflux Disease (GERD), residual alcohol trapped under dentures, or alcohol from bleeding gums.
  • The EC sensor is cross-sensitive to other alcohols such as methanol and isopropanol.
  • The EC sensor's output is temperature dependent, and the sensor suffers short-term fatigue if exposed to a series of successive alcohol-containing tests.
  • EC based instruments are not accepted for evidential use in many countries, states and jurisdictions.
4. Infrared Spectroscopy (“IR”):

IR technology (IR Spectra-photometry) based breath-alcohol testers were first introduced in the mid-1970s. IR instruments have become the standard worldwide for legal, evidential breath analysis.
The analytical concept is based on the Beer-Lambert law, the "law of absorption". It describes the linear relationship between absorbance and the concentration of an absorber of electromagnetic radiation. Alcohol vapor introduced into an absorption chamber will absorb some of the IR radiation transmitted through the chamber. The amount of IR absorption is in direct proportion to the quantity of alcohol present (breath alcohol). However, alcohol absorbs only IR radiation of specific wavelengths. The two predominantly utilized wavelengths are centered at 3.39 and 9.5 μm. The latest generation of instruments monitors IR absorption at 9.5 μm because measurements there are far less prone to interference from hydrocarbons and acetone, which absorb IR energy near 3.4 μm.
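The Beer-Lambert relationship above can be turned into a short worked calculation. The absorptivity and chamber length below are placeholder values, not specifications of any real instrument.

```python
import math

def concentration(I0, I, epsilon, path_cm):
    """Beer-Lambert: A = log10(I0/I) = epsilon * c * l, so c = A / (epsilon * l).
    I0 is the incident IR intensity, I the transmitted intensity,
    epsilon the (assumed) absorptivity, path_cm the chamber length."""
    absorbance = math.log10(I0 / I)
    return absorbance / (epsilon * path_cm)

# Example: 10% of the 9.5 µm radiation is absorbed over an assumed 20 cm chamber
c = concentration(I0=1.0, I=0.9, epsilon=0.8, path_cm=20.0)
print(c)
```

The key property the instrument relies on is the linearity: doubling the alcohol concentration in the chamber doubles the absorbance, so a calibration at one or two points fixes the whole response curve.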

Appreciating the most significant benefits of "real-time" IR absorption analysis (continuous measurement) requires understanding the dynamics of alcohol in human breath. Some of these dynamics relate to gas exchange in the mucous membranes, residual alcohol in the upper respiratory tract, belching, burping, Gastro Esophageal Reflux Disease (GERD), exhaled air volume, breath flow rates and the subject's breathing pattern.

Only IR technology is capable of addressing these dynamic, physiological factors to deliver a legitimate, rightful and forensically justifiable breath-alcohol measurement.
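One way real-time instruments exploit the continuous absorption curve is plateau detection: alveolar (deep-lung) air produces a curve that rises and then levels off, whereas mouth alcohol typically produces a falling curve. The slope tolerance and the sample curves below are invented for illustration, not the logic of any specific instrument.

```python
def is_alveolar_plateau(readings, slope_tol=0.002):
    """Accept the sample only if the last few concentration readings
    have leveled off (successive differences within slope_tol)."""
    tail = readings[-3:]
    deltas = [abs(b - a) for a, b in zip(tail, tail[1:])]
    return all(d <= slope_tol for d in deltas)

# Simulated breath-alcohol curves sampled during exhalation (arbitrary units)
rising_then_flat = [0.010, 0.030, 0.055, 0.070, 0.078, 0.079, 0.080]
mouth_alcohol    = [0.120, 0.095, 0.070, 0.050, 0.035, 0.022, 0.012]

print(is_alveolar_plateau(rising_then_flat))  # accepted: curve has leveled off
print(is_alveolar_plateau(mouth_alcohol))     # rejected: curve still falling
```

An EC-based handheld, which sees only one fixed-volume puff, has no curve to inspect, which is exactly the limitation listed under the fuel-cell disadvantages above.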

Advantages:
  • IR-based equipment observes the breath-alcohol concentration throughout the subject's exhalation. This allows the entire IR-absorption curve to be plotted and lets the instrument's intelligence assure that the breath sample was of alveolar nature and that no residual or mouth alcohol was present.
  • The recorded absorption curve can be presented in court if the case is challenged.
  • The IR system does not have a limited life expectancy, will not fatigue with successive, high alcohol concentration test series and remains extremely stable for years.
  • These instruments are equipped with many other important peripherals and functionalities (please observe “Other required performance features for evidential breath testers” below).
  • IR instruments are today’s standard worldwide for legal, evidential breath-alcohol analysis and consequently face fewer legal challenges than all other breath testing devices and technologies. 
Disadvantages: 
  • IR instruments are larger in size thus, not suitable for portable, handheld operation.
  • These instruments are more expensive than handheld (screening) equipment employing solid-state or EC sensors.


Various human specimens can be considered for measuring a person's alcohol concentration. All body fluids, as well as expired breath, are legitimate specimens for alcohol concentration measurements. However, the two most popular methodologies for medico-legal alcohol testing are blood analysis and breath analysis.


Roadside tests, or so-called screening tests, are conducted with handheld, mainly EC-based instruments. These instruments are portable, battery operated and provide quick test results. The main objective of these tests is to establish probable cause for submission to an evidential test procedure.