WO1997038292A1 - Acoustic condition monitoring of objects - Google Patents

Acoustic condition monitoring of objects

Info

Publication number
WO1997038292A1
Authority
WO
WIPO (PCT)
Prior art keywords
objects
plot
signal
data
frequency
Prior art date
Application number
PCT/NO1997/000069
Other languages
French (fr)
Inventor
Per-Einar Rosenhave
Original Assignee
Rosenhave Per Einar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from NO961446A external-priority patent/NO961446D0/en
Priority claimed from NO971017A external-priority patent/NO971017D0/en
Application filed by Rosenhave Per Einar filed Critical Rosenhave Per Einar
Priority to AU24130/97A priority Critical patent/AU2413097A/en
Publication of WO1997038292A1 publication Critical patent/WO1997038292A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M13/00Testing of machine parts
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/09Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool
    • B23Q17/0952Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool during machining
    • B23Q17/098Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool during machining by measuring noise
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B24GRINDING; POLISHING
    • B24BMACHINES, DEVICES, OR PROCESSES FOR GRINDING OR POLISHING; DRESSING OR CONDITIONING OF ABRADING SURFACES; FEEDING OF GRINDING, POLISHING, OR LAPPING AGENTS
    • B24B49/00Measuring or gauging equipment for controlling the feed movement of the grinding tool or work; Arrangements of indicating or measuring equipment, e.g. for indicating the start of the grinding operation
    • B24B49/003Measuring or gauging equipment for controlling the feed movement of the grinding tool or work; Arrangements of indicating or measuring equipment, e.g. for indicating the start of the grinding operation involving acoustic means

Definitions

  • the present invention relates to a method of detecting and processing acoustic signals emitted from reciprocating, oscillating or rotating objects for the purpose of recording and predicting changes in the condition of the objects.
  • the invention also relates to an apparatus adapted to carry out the method.
  • the method is useful in the on-line or off-line prediction of conditions and performance of internal components in machinery, so as to allow for so-called condition based maintenance, for example, whereby no repair needs to be done before measurements and analysis of a component indicate that replacing the component really is necessary.
  • US Patent no. 5 361 628 describes a system for processing test measurements collected from an internal combustion engine for diagnostic purposes.
  • the system described is intended for cold- testing of newly manufactured engines.
  • the engine being tested is cranked at a prescribed speed via an external motor, with no fuel supply or ignition.
  • Various kinds of process data such as oil pressure, inlet and outlet pressure, and torque diagrams, are used as input data to the analyses. It is mentioned that data for the analysis may also be entered from diagnostic sensors, such as acoustic sensors.
  • the known method is based on so-called triggering, and the process of filtering and pre-processing of acquired signals comprises various "equalizing" measures, including data reduction, mean-weighted mean, and AC removal, the reason being that the information is found directly in the time signal.
  • Principal Component Analysis (PCA - described below) is used to "observe" the data in a more surveyable form, before the PCA results, during the subsequent classification and analysis, are used as input data to neural networks and other classification methods.
  • the use of neural networks makes it possible for fault conditions not previously classified to be detected and classified.
  • the prior art method is capable of modelling non-linearities in the mass of data.
  • a further prior art method relates to a mill, but is entirely devoted to product control. Even so, the method makes use of acoustics, analogue-to-digital conversion, frequency transformation and Principal Component Analysis (PCA) or Partial Least Squares (PLS - also described below) to provide the desired information.
  • PCA Principal Component Analysis
  • PLS Partial Least Squares
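  To make the PCA step concrete, the following is a minimal illustrative sketch (not code from the patent): finding the first principal component of a tiny two-variable data set by mean-centring, computing the covariance matrix, and running power iteration. Real acoustic spectra would have hundreds of variables, but the mechanics are the same.

```python
# Minimal PCA sketch (pure Python, illustrative assumption only):
# find the first principal component of a 2-variable data set.

def first_pc(rows):
    n = len(rows)
    # Mean-centre each column.
    means = [sum(r[i] for r in rows) / n for i in range(2)]
    x = [[r[i] - means[i] for i in range(2)] for r in rows]
    # 2x2 covariance matrix.
    c = [[sum(a[i] * a[j] for a in x) / (n - 1) for j in range(2)]
         for i in range(2)]
    # Power iteration for the dominant eigenvector (the loading vector).
    v = [1.0, 1.0]
    for _ in range(100):
        w = [c[0][0] * v[0] + c[0][1] * v[1],
             c[1][0] * v[0] + c[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]
pc1 = first_pc(data)
# The two variables are strongly correlated, so the loading
# vector points roughly along the y = 2x direction.
print(pc1)
```

  Score plots such as those in the figures are obtained by projecting each mean-centred observation onto loading vectors like `pc1`.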
  • the present invention relates primarily to the task of quantifying the condition of single components in operating machinery, in such a manner that the result is useful for monitoring, maintenance and control purposes. To this end, it is a commonly known technique to carry out the following four processing steps:
  • in step 1, data may be acquired from various sources.
  • acoustic sensors such as acoustic emission sensors, accelerometers, speed or position meters, microphones, etc.
  • Even indirect sensors measuring vibration by means of laser light reflection may be used.
  • relevant data of the process monitored in addition to the acoustic signature.
  • data may be the number of revolutions, fluid flow, temperature, and power, for example. The data to be incorporated will depend on the present situation, and their relevance will be found from the loading or loading weights plot of the final calibrated model.
  • the result should be presented to the end user in a readily comprehensible format, such as a column in a bar chart displayed on a screen, for example, directly indicating the reduction in performance or deterioration of a condition as a percentage compared to a fresh or new component, or some other reference.
  • the result may be presented to the user by simple means, such as three coloured light signals, for example: green light representing normal operation, yellow light indicating a development or change in condition or performance which is undesirable or soon needs attention, and red light indicating a condition or performance to be immediately corrected to avoid breakdown.
  • in step 2, the signals received from the transducer(s) employed are usually digitised prior to further processing.
  • the signals have to be sampled under such circumstances and at a sufficiently high rate that aliasing is avoided and the shape of the signal curve is best preserved.
  • the over-sampling should be as high as practically possible, that is in the range of 2 to 60 times higher than the highest signal frequency having significance to the phenomenon monitored. With a sufficiently high over-sampling, the problem of aliasing can be avoided without the use of low-pass filters, the raw signals instead being digitally filtered and possibly resampled at a lower frequency.
  • the minimum length of the time signal depends on the circumstances, as the signal must be of such a length that the frequencies characterising the phenomenon to be monitored are included an adequate number of times to be picked up.
  • this minimum length requirement is reduced as compared to methods whereby single frequencies only are decisive in respect of the result.
  • tests have shown that time series of a duration of 0.5 to 10 sec. cover most applications.
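  The oversample-filter-resample idea described above can be sketched as follows; the moving-average filter, rates and decimation factor are illustrative assumptions, not taken from the patent.

```python
# Sketch of digital filtering plus resampling after over-sampling:
# smooth the raw signal with a moving average, then keep every
# k-th sample. (All parameters are assumed for illustration.)
import math

def moving_average(signal, width):
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def decimate(signal, factor):
    return signal[::factor]

fs = 8000                       # over-sampled rate, Hz
f = 50                          # frequency of interest, Hz
raw = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]  # 1 s
smooth = moving_average(raw, 5)
resampled = decimate(smooth, 8)  # new effective rate: 1000 Hz
print(len(raw), len(resampled))  # → 8000 1000
```

  A 50 Hz component passes the short moving average almost unattenuated, while much higher-frequency noise is suppressed before the rate reduction.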
  • SKF Condition Monitoring M800A is a programmable machinery monitoring system that monitors, for example, radial vibration, axial position, temperature and speed of large screw compressors.
  • the prior art systems focus on monitoring individual critical components of complex machinery, i.e. components such as bearings and shafts, or, to be more specific: radial or axial displacement, misalignment, damaged gears, temperature, or general mechanical looseness.
  • signal analysis techniques like frequency analysis, enveloping, crest factor and sound intensity are frequently used. Such methods of analysis do not, however, extract all the available information inherent in the time and frequency domains.
  • an object of the present invention is to provide a method and apparatus which make better use of the information inherent in detectable signals emitted from working machinery, and the like; in particular, to facilitate condition based maintenance and operation of such machinery.
  • a method of detecting and processing acoustic signals emitted from reciprocating, oscillating or rotating objects to record and predict changes in the condition of the objects, the method comprising detecting and recording different types of signals emitted from said objects and having varying amplitude, wavelength or frequency, and processing said recorded signals mathematically, the result of said mathematical processing being treated further by means of multivariate calibration so as to obtain information about the condition of said objects.
  • the mathematical processing preferably involves carrying out a Fast Fourier Transform (FFT), or employing an Angular Measuring Technique (AMT), to obtain resulting spectra for immediate or later processing by means of said multivariate calibration, in order to separate said conditions of the objects from one another.
  • FFT Fast Fourier Transform
  • AMT Angular Measuring Technique
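  The frequency-transform step can be illustrated with a naive discrete Fourier transform in pure Python; a real implementation would use an FFT library, but the underlying computation is the same.

```python
# Naive DFT magnitude spectrum (illustrative sketch, not the
# patent's implementation). Real systems use an FFT for speed.
import math

def dft_magnitude(x):
    n = len(x)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

n = 64
x = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]  # 5 cycles
spectrum = dft_magnitude(x)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
print(peak_bin)   # 5
```

  Each recorded time series yields one such spectrum; the spectra then form the rows of the data matrix fed to PCA or PLS.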
  • the method comprises the step of, in advance, producing a suitable multivariate model, preferably based on empirical data of the ideal condition of the objects, so as to allow the result of said multivariate calibration to be used to determine where the object is positioned between said ideal condition and a state of breakdown.
  • one or more of a variety of methods may be employed for the mathematical signal processing when the general signal conditioning has been carried out; such methods include those of Gabor, Wavelets, and Wigner-Ville, in addition to the Fast Fourier Transform (FFT) and Angular Measuring Technique (AMT), the latter two only being discussed in detail below.
  • the method of the invention utilizes signals in the frequency band ranging from DC to MHz to characterise certain conditions of an object; that is, signals in the whole acoustic spectrum can be used, and normally no specific frequency range has to be selected; the effect being that, as the information is collected from a wider range of frequencies, the characterisation also is improved.
  • the method makes it feasible to construct instruments that are capable of simultaneously quantifying the condition of one or more components of different kinds comprised within the same piece of machinery.
  • This is because of the efficient methods of frequency transformation and phenomenon characterisation being used, as the invention also permits the use of transformation methods (such as Gabor, Wavelets, and Wigner-Ville) simultaneously taking account of both the frequency and the time domain, so-called joint time-frequency methods.
  • This approach according to the invention produces a signal fed to the multivariate model which contains more, and partly more diverse, information about the phenomena to be quantified; hence, enabling a better discrimination and isolation of the various phenomena, and, therefore, a more correct quantification thereof.
  • the Gabor method would give a better and more robust characterisation of the base signal, with more degrees of freedom, than traditional Fourier transforms.
  • both acoustic signals and process data are made use of at the same time.
  • the method allows for a reduction in the number of sensors at present needed to perform certain control tasks.
  • Furthermore, experience already gained in respect of a certain installation or piece of machinery can be recorded in an experience data base, and by connecting the output of the multivariate analysis to such a data base, it is possible to establish a system whereby quantified condition data is considered for the purpose of specifying specific measures to be carried out on the basis of the data and condition changes recorded.
  • the level of skill required of maintenance professionals may be reduced, at the same time as the efficiency and operating life of the monitored equipment increase.
  • the method according to the invention combines the best from well-proven sensor technologies with empirical calibration of virtually any type of instruments or signals.
  • the method does not rely on specific sensors, nor on one specific method of analysis. It is composed of a set of flexible options for the optimal combination of sensors and their accompanying pre-processing methods, which act as alternative, or complementary, inputs for signal analysis and for multivariate instrument and signal calibrations. Adding the feature of multivariate calibration data analysis to these well known tasks produces a new approach to condition based maintenance, this approach being named Acoustic Machine Condition Monitoring, or AMCM for short.
  • AMCM Acoustic Machine Condition Monitoring
  • Figure 1 is a block diagram of a typical data acquisition system
  • Figure 2 is a digitised sine wave with 3-bit resolution
  • Figure 3 shows a raw signal acquired from a pump
  • Figure 4 illustrates adequate (top graph) and inadequate (bottom graph) signal sampling
  • Figure 6 is a PCA scores plot showing the consequences of using different types of FFT parameters
  • Figure 7 is a loading plot of PC1 vs. PC2 (p1p2)
  • Figure 8 is a first impression of comparable spectra
  • Figure 9 is a flow chart of tests performed on a main sea water pump
  • Figure 10 is a matrix plot of a pre-opening data structure
  • Figure 11 is a PCA score-plot showing the paired replicates of 7 test-points
  • Figure 12 is a single vector plot showing the location of curves
  • Figure 13 is a data matrix plot of the X matrix used in the impeller wear- and damage analysis
  • Figure 14 is a residual X-variance plot
  • Figure 15 is a PCA score plot (t1t2) showing a partly successful discrimination between four experimental set-ups
  • Figure 16 is a PLS1-discrim score plot illustrating wear as a movement to the right, and a damage as a movement upwards
  • Figure 17 is a PLS1-discrim t1t2 plot including arrows indicating displacement due to wear and damage for different sensor locations,
  • Figure 18 is a t1t2 plot from PLS1-discrim with measurement points 1 , 5 and 7 forming triangles, each representing one wear or damage situation,
  • Figure 19 is a plot illustrating different positions of spectra from three measuring locations on pump piping
  • Figure 20 is a loading weights plot from the PLS1-discrim
  • Figure 21 is a PCA loading plot showing different contributions of the variables to PCA and PLS1-discrim, respectively.
  • Figure 22 is a t1u1 score plot showing clear groupings
  • Figure 23 is a t1t2 score plot from the PLS1 model showing the same tendencies as in the t1u1 plot of Figure 22,
  • Figure 24 is a diagram showing predicted values plotted vs. measured values of fuel injection valve performance.
  • Figure 25 is a diagram showing the resulting PCA score plots when frequencies below 7.8 kHz and above 32.8 kHz are removed.
  • Data acquisition (DAQ) systems based on personal computers (PCs) with plug-in boards are used for a wide range of applications in the laboratory, in the field, and on the manufacturing plant floor.
  • Such data acquisition boards of the general-purpose type are well suited instruments for measuring voltage signals.
  • for analogue signals, it is usually not sufficient just to wire the signal source lines to such a data acquisition board.
  • FIG. 1 is a block diagram of a typical data acquisition system, and as can be seen from the figure, a typical data acquisition system consists of a transducer 2, which registers the physical phenomena 1 in question and converts the phenomena into more convenient form, for example a voltage or current signal 3.
  • This signal is fed to a front- end pre-processing unit 4 for signal conditioning, before it is delivered to the PC data acquisition unit 5.
  • This front-end pre-processing 4 is necessary, because many transducers require a bias or excitation by current or voltage, bridge completion, linearisation, or high amplification for reliable and accurate operation. The integrity of the acquired data depends upon the entire analogue signal path.
  • the transducers most commonly used in such data acquisition systems convert physical quantities, such as temperature, strain, pressure and acceleration into electrical quantities, such as voltage, resistance or current.
  • the characteristics of the transducer actually being used define the signal requirement of the data acquisition system.
  • An example of such a transducer is the piezoelectric accelerometer which typically comprises a slab of quartz crystal. Then, simply by squeezing it, a potential difference can be produced across the slab, but the crystal is not capable of a true DC response. Also, such piezoelectric elements will produce a charge only when acted upon by dynamic forces.
  • piezoelectric accelerometer when a piezoelectric accelerometer is vibrated, forces proportional to applied acceleration act on the piezoelectric elements. The charge generated is then picked up by electric contacts.
  • the piezoelectric element is characterised by an extreme linearity over a very wide dynamic range and frequency range.
  • the sensitivity of a piezoelectric material is usually specified in pC/N (picoCoulomb per Newton) and the sensitivity of an accelerometer in mV/g.
  • the frequency range can easily span 1 Hz to 25 kHz.
  • Piezoelectric crystals are anisotropic, i.e. have different properties in different directions.
  • the accelerometer exhibits directional properties which are characterised by a transverse sensitivity down to as low as 1 % of the reference sensitivity.
  • Adequate signal conditioning equipment will improve the quality and performance of a system, regardless of the type of sensor or transducer being used, and typically the signal conditioning functions include functions such as amplification, filtering and isolation of any type of signals.
  • the signal should be amplified, so that the maximum voltage range of the conditioned signal equals the maximum input range of the data acquisition board.
  • the noise level of the accelerometer is less than 40 μV (RMS) (such as Brüel & Kjær)
  • the smallest signal that can be recorded is 40 μV, giving a sensitivity of 0.040 ms⁻², and if the input wires from the accelerometer travel 10 m through an electrically noisy plant environment before reaching the data acquisition board, the various noise sources in the environment may possibly induce as much as 200 μV in the wires, and a noisy acceleration reading of about 5 ms⁻² will result.
  • unwanted signals should be removed from the analogue accelerometer signal by means of a filter.
  • noise filters are often used to attenuate high frequency variations in DC-class signals, such as signals representing temperature; for AC-class signals, such as signals representing vibration, a different type of filter, an antialiasing filter, is more often employed.
  • Such antialiasing filters are low-pass filters with a very steep cut-off rate, almost completely removing all frequencies higher than a given frequency. If such frequencies were not removed, they would erroneously appear as valid signals within the actual measuring bandwidth.
  • signal conditioning is often used to isolate the transducer signals from the computer for safety reasons.
  • the system being monitored may contain high-voltage transients that could damage the computer.
  • Another reason for implementing isolation is to make sure that the readings from the data acquisition board are not affected by differences in ground potentials or common-mode voltages. If the input signal and the acquired signal are each referenced to "ground", problems will occur if there is a potential difference between the two "grounds". This difference will form a so-called ground loop, which may cause inaccurate representation of the acquired signal, or, if too large, may damage the measuring system. Isolation of the signal conditioner prevents most of these problems by passing the signal from its source to the measurement device without a galvanic or physical connection. Isolation breaks ground loops, rejects high common-mode voltages, and ensures that the signals are accurately acquired.
  • the signal conditioning unit usually generates excitation of such transducers.
  • strain gauges, thermistors, and accelerometers require external supply of voltage or current excitation. Vibration measurements are usually made with a constant current source that converts the variation in resistance to a measurable voltage.
  • Some transducers e.g. thermocouples, have a non-linear response to changes in the phenomena being measured. Therefore, linearisation also is often performed by the signal conditioning unit before the signal is fed to an analogue-to-digital converter included in the conditioning unit (see below).
  • the product specifications of standard data acquisition boards indicate features such as the number of channels, sampling rate, resolution, range, accuracy, noise, and non-linearity, all of which influence the quality of a digitised signal.
  • the number of analogue channel inputs is specified for both kinds of inputs, single-ended and differential.
  • Single-ended inputs are all referenced to a common ground point, and should be used when the input signals are high-level (higher than 1 V) signals, the cables from the signal source to the analogue input hardware are short (less than 3 m), and all input signals share the same common ground reference. If this is not the case, differential inputs should be used. With differential inputs, each input has its own ground reference. Noise errors are reduced because common-mode noise picked up by the wires is cancelled out.
  • the data acquisi- tion board or the computer itself comprises an analogue-to-digital converter (ADC) adapted to convert the input analogue signal to a digital value.
  • ADC analogue-to-digital converter
  • the sampling rate of the converter determines how often conversion is to take place. A faster sampling rate acquires more points in a given time span and thus can often create a better representation of the signal.
  • the Nyquist Sampling Theorem states that sampling must be performed at a minimum of twice the rate of the maximum frequency component to be detected.
  • a simple heuristic says that in order to recreate a signal properly after digitisation, the signal must be over-sampled at 8 to 10 times the rate of the maximum frequency component present.
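  The two rules above amount to simple arithmetic; the 25 kHz figure below is taken from the accelerometer range mentioned earlier, the rest is illustrative.

```python
# Worked numbers for the sampling-rate rules: Nyquist minimum (2x)
# versus the 8-10x over-sampling heuristic for shape recovery.
f_max = 25_000               # highest frequency of interest, Hz

nyquist_rate = 2 * f_max     # bare minimum to avoid aliasing
oversampled = 10 * f_max     # heuristic for faithful curve shape
print(nyquist_rate, oversampled)   # → 50000 250000
```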
  • multiplexing is a commonly used technique whereby multiple channels are routed to a single analogue-to-digital converter.
  • the analogue-to-digital converter then samples one channel, switches to the next channel and samples the next channel, switches to the following channel, and so on. Since the same analogue-to-digital converter is used to sample many channels instead of one, the effective rate of each channel is inversely proportional to the number of channels sampled.
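  The inverse-proportionality of multiplexed channel rates can be shown with a one-line calculation; the ADC rate and channel count are assumed figures.

```python
# Effective per-channel rate when one ADC is multiplexed across
# several channels: the total rate divided by the channel count.
adc_rate = 100_000           # samples/s for the single ADC (assumed)
channels = 8
per_channel = adc_rate // channels
print(per_channel)           # → 12500 samples/s per channel
```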
  • the resolution is the number of bits that the analogue-to-digital converter uses to represent the analogue signal. The higher the resolution, the higher the number of input voltage divisions the range is broken into, and, therefore, the smaller the detectable voltage change will be.
  • Figure 2 shows a sine wave and its corresponding digital image as obtained by an ideal 3-bit analogue-to-digital converter.
  • a 3-bit analogue-to-digital converter (which is actually seldom used, but a convenient example) divides the analogue range into 2³, or 8, divisions. Each division is represented by a binary code between 000 and 111. Clearly, information is lost in the conversion, so a 3-bit code is not a good representation of the analogue signal.
  • with a 16-bit converter, the number of codes from the analogue-to-digital converter increases from 8 to 65536, and a very good representation of the original analogue signal is obtained.
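  The 3-bit quantisation of Figure 2 can be reproduced in a few lines; the ±1 V range and the sine input are illustrative assumptions.

```python
# Quantising a sine with an ideal 3-bit ADC, as in Figure 2:
# an assumed +/-1 V range is split into 2**3 = 8 codes.
import math

def quantise(v, v_min=-1.0, v_max=1.0, bits=3):
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    code = int((v - v_min) / step)
    return min(code, levels - 1)        # clamp the top edge

samples = [math.sin(2 * math.pi * t / 16) for t in range(16)]
codes = [quantise(s) for s in samples]
print(codes)   # codes reach both extremes, 0 and 7
```

  Raising `bits` to 16 shrinks the step from 0.25 V to about 30 μV, which is why the 16-bit representation is so much better.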
  • the range refers to the minimum and maximum voltage levels that the analogue-to- digital converter is able to quantify.
  • Most analogue-to-digital converter boards offer selectable ranges, so that the board is configurable to handle a variety of different voltage levels. By changing the range, the analogue signal can be adapted to the range of the analogue-to-digital converter board, so that the signal is measured with maximum accuracy.
  • the code width is a function of the range, resolution, and gain available on the analogue-to-digital converter board.
  • the ideal code width is found by dividing the voltage range by the gain times two raised to the number of bits of resolution.
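  The code-width formula just stated can be checked with assumed but typical board figures (a 20 V range, gain 100, 16-bit resolution):

```python
# Ideal code width (smallest detectable voltage change):
# range / (gain * 2**bits), per the formula above.
def code_width(v_range, gain, bits):
    return v_range / (gain * 2 ** bits)

# Assumed example: 20 V range (+/-10 V), gain 100, 16-bit converter.
w = code_width(20.0, 100, 16)
print(w)   # roughly 3.05 microvolts per code
```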
  • a grounded or ground-referenced measurement system is one in which the voltage signal is referenced to the building system ground.
  • a grounded signal source is best measured with a differential or non-referenced measurement system.
  • the voltage measured will in this case be the sum of the signal voltage and the potential difference that exists between signal ground and the measurement system ground. This potential is generally not a DC level.
  • the result is a noisy measurement system often showing power-line frequency components in the readings. Noise which is introduced by ground-loops may have both AC and DC components introducing offset errors as well as noise in the measurements.
  • the potential difference between the two grounds causes a current - called the ground-loop current - to flow into the interconnection.
  • a ground referenced system can still be used if the signal voltage levels are high and the interconnection wiring between the source and the measurement device has a low impedance. If this is the case, the signal voltage measured is degraded by the ground-loop, but the degradation may be tolerable.
  • the differential (DIFF) or the non-referenced single-ended (NRSE) input on a typical data acquisition board both provide a non-referenced measurement system.
  • any potential difference between references of the source and the measuring device appears as common-mode voltage to the measurement system, and is subtracted from the measured signal.
  • steps are preferably taken to have non-referenced signal sources, i.e. a floating source and differential input configuration.
  • resistors are included to provide a return path to ground for instrumentation amplifier input bias currents.
  • the measurement is non- referenced.
  • the signal is floating with respect to ground.
  • Floating signal sources can be measured using both differential and single-ended measurement systems. If a differential measurement system is used, then care should be taken to ensure that the common-mode voltage level of the signal with respect to the measurement system ground remains within the common-mode input range of the measurement device.
  • two bias resistors are connected between each wire and measurement ground. The bias resistors prevent the input bias current in the instrumentation amplifier from moving the voltage level of the floating source out of the valid range of the input stage of the data acquisition board. This serves to prevent offset errors as well as noise in the measurement.
  • the two resistors included in such an arrangement provide a DC path from the instrumentation amplifier input terminals to the instrumentation amplifier ground. Failure to use such resistors will result in erratic or saturated (positive full-scale or negative full-scale) readings.
  • the two resistors must be large enough to allow the source to float with respect to the measurement reference and not load the signal source, yet small enough to keep the voltage within the range of the input stage of the analogue-to-digital converter board.
  • a signal conditioning unit ought to be versatile. It should handle many different types and makes of transducers. It should also be able to add gain to signals, or even attenuate signals from the transducer. Finally, it must handle aliasing problems. Thus, the following criteria or parameters should be taken into account in the design of a signal conditioning unit:
  • the triggering device should be TTL compatible and deliver every pulse, or every second pulse, at choice.
  • the output should be of low impedance and be able to source at least ±5 mA with a swing of ±8 V.
  • the overall gain bandwidth product should be at least 100 MHz.
  • a suitable amplifier arrangement comprises one monolithic instrumentation amplifier PGA 202 available from Burr-Brown, having digitally controlled gains of 1, 10, 100, and 1000, and connected to one model PGA 203 amplifier from the same supplier, providing gains of 1, 2, 4, and 8. Both amplifiers have TTL or CMOS compatible inputs for easy microprocessor interfacing. As the two channels making up the signal conditioning or signal amplifying unit are identical, only one channel will be referred to in the following description.
  • the amplifiers PGA 202 and PGA 203 both have FET inputs, which give extremely low input bias currents. Because of the FET inputs, the bias currents drawn through input source resistors have a negligible effect on DC accuracy; the picoamp currents produce merely microvolts across megohm sources. A return path for the input bias currents is provided through 1 Mohm resistors connected between the inputs and analogue ground (AGND). Without this return path, the amplifier could wander and saturate because of possible stray capacitance or any current leakage through the coupling capacitors. These capacitors prevent the excitation of the transducer from reaching the instrumentation amplifier inputs; without them, the instrumentation amplifier would saturate immediately.
  • the output stage of the amplifiers PGA 202 and PGA 203 is a differential trans- impedance amplifier with laser-trimmed output resistors to help minimise the output offset and drift.
  • the rated output current is typically ±10 mA, while the output impedance is 0.5 ohm. All power supplies to both instrumentation amplifiers are decoupled with 1 µF tantalum, and 0.1 µF and 1 nF ceramic capacitors. The capacitors are located as close to the instrumentation amplifier as possible for maximum performance. To avoid noise, gain and CMR errors, the digital ground (DGND) and the analogue ground (AGND) are separated.
  • the first instrumentation amplifier is designed with an output offset adjustment circuit, while the second amplifier has both input offset and output offset adjustment circuits.
  • the adjustment circuits are individually adjustable.
  • the choice of the buffering operational amplifier is very important for the performance of the output offset adjustment circuit.
  • the Burr-Brown OPA 602 is used, because of its low impedance and wide bandwidth.
  • the offset adjustment controls are accessible from the front panel of the signal conditioner.
  • Gain selection is accomplished by applying a two-bit word to the gain select inputs of each amplifier, A0 to A3, and gains can be selected from 1 to 8000 when having one PGA 202 and one PGA 203. With, for instance, two PGA 203 amplifiers cascaded, the gain would be selectable in 16 steps with a gain ranging from 1 to 64.
  • the gain is set in two ways: either by setting the hardware switches on the front panel, or by the computer software, provided the hardware switches are set in the "ON" position, reading the digital word "1111". The hardware switches directly reflect the selectable gains. On the single channel signal amplifying unit these signals should be "floating", and the switches placed in the intermediate position.
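The overall gain of the two cascaded programmable-gain amplifiers is simply the product of the two stages' gains. The following sketch (illustrative only, not vendor code) enumerates the selectable overall gains for the PGA 202/PGA 203 combination described above:

```python
# Cascaded programmable-gain amplifiers: the overall gain is the product
# of the two stages' individually selected gains.
PGA202_GAINS = [1, 10, 100, 1000]   # selected by one two-bit word
PGA203_GAINS = [1, 2, 4, 8]         # selected by the other two-bit word

def overall_gains(stage1=PGA202_GAINS, stage2=PGA203_GAINS):
    """Return the sorted set of selectable overall gains of the cascade."""
    return sorted({g1 * g2 for g1 in stage1 for g2 in stage2})

gains = overall_gains()
print(len(gains), min(gains), max(gains))  # 16 distinct gains, from 1 to 8000
```

Note that two cascaded PGA 203 amplifiers would give 16 switch settings but only 7 distinct gain values (1 to 64), since some products coincide; with the decade steps of the PGA 202 all 16 products are distinct.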
  • the cut-off frequency of the output amplifier is set to 1 MHz. With a total system gain less than 1000, the gain will be constant over a frequency spectrum ranging from DC to 1 MHz. With system gains greater than 1000, the gain will be constant over a frequency spectrum ranging from DC to 100 kHz.
  • the signal conditioning unit is equipped with a 10 pole inverse Chebyshev filter.
  • This filter is realised by the use of five cascaded UAF42 universal active filters available from Burr-Brown.
  • the UAF42 is a monolithic integrated circuit (IC) which contains the operational amplifiers, matched resistors, and precision capacitors needed for a state-variable filter pole-pair.
  • a fourth uncommitted operational amplifier is also included on the die.
  • active filter design and verification are tedious and time consuming.
  • Burr-Brown provides a computer-aided filter design program under the trade name FilterPro. This program is used to design and implement the antialiasing filter.
  • the inverse Chebyshev filter type is chosen and implemented because this filter type has a flat magnitude response in the pass-band with a steep rate of attenuation in the transition-band. Ripple in the filter's stop-band and some overshoot and ringing in the step response, are undesirable but unavoidable. They are considered as being without significance in this application. Since the analogue-to-digital converter employed has a maximum sampling rate of 100 kS/s, the filter cut-off frequency is set at 47.0 kHz with a stop-band attenuation of -100 dB. Then the -3 dB frequency is 25.51 kHz, so that the whole audio frequency domain ranging from DC to 20 kHz will be unattenuated even when the filter is in use.
  • the last segment in the data acquisition system chain consists of the computer, the data acquisition board and the acquisition software.
  • the performance of the named three components is crucial to the performance of the whole data acquisition system. Optimisation can be achieved in several ways, with respect to price, weight, or processing speed. In the following, versatility, cost-effectiveness and portability are given most consideration.
  • the computer used for this purpose is a Dell Latitude 90XPi which is a standard off-the-shelf laptop computer with a 90 MHz Intel Pentium processor, 40 Mbytes RAM and a 1.2 Gbytes removable hard disk. In addition, the computer is equipped with two PCMCIA slots suitable for National Instruments' DAQCard 1200.
  • The National Instruments DAQCard 1200 is used for data acquisition.
  • DAQCard 1200 is a low-cost, low-power, analogue input, analogue output, digital I/O card for personal computers equipped with a PCMCIA Type II slot.
  • the card contains a 12-bit, successive- approximation analogue-to-digital converter having eight inputs which can be configured as eight single-ended or four differential channels.
  • the card also has 12-bit digital-to-analogue converters (DACs) having voltage outputs, 24 lines of TTL-compatible digital I/O and three 16-bit counter/timer channels for timing I/O. All these facilities are available through a 50-pin connector and cable which plugs directly onto the card.
  • the analogue input circuitry of DAQCard 1200 consists of two analogue input multiplexers, mux counter/gain select circuitry, a software-programmable gain amplifier, a 12-bit ADC, and a 12-bit FIFO memory that is sign-extended to 16 bits. In the present case, only a few of the facilities provided by the DAQCard are required for collecting the data.
  • the ADC itself will be briefly discussed, because a certain knowledge of the operations of the ADC is important in order to understand the acquisition software.
  • the 12-bit resolution of the converter allows the converter to resolve its input range into 4096 different steps.
  • the ADC clocks the result to the A/D FIFO.
  • This FIFO serves as a buffer to the ADC.
  • the A/D FIFO can collect up to 1024 A/D conversion values before any information is lost, thus allowing software some extra time to catch up with the hardware.
  • An error condition called FIFO overflow occurs if more than 1024 samples are stored in the FIFO before being read. This error will result in a loss of information and must be avoided.
  • the output from the ADC can be interpreted as either straight binary or two's complement.
  • the DAQCard works in bipolar mode, and the data from the ADC is then interpreted as a 12-bit two's complement number having a range of -2048 to +2047.
  • the output from the ADC is sign-extended to 16 bits, causing either a leading 0 or a leading F Hex to be added, depending on the coding and the sign.
  • data read from the FIFO are 16 bits wide.
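The 12-bit two's-complement coding and its 16-bit sign extension described above can be sketched as follows (an illustrative model of the conversion, not vendor code):

```python
def sign_extend_12bit(raw):
    """Interpret a 12-bit two's-complement ADC code (0x000..0xFFF) and
    sign-extend it to a 16-bit word.

    Returns (value, word16): value is in the bipolar range -2048..+2047,
    and word16 is the 16-bit pattern with a leading 0 or F Hex nibble."""
    value = raw - 0x1000 if raw & 0x800 else raw   # fold codes >= 0x800 to negative
    word16 = value & 0xFFFF                        # 16-bit two's-complement pattern
    return value, word16

for code in (0x7FF, 0x800):
    value, word16 = sign_extend_12bit(code)
    print(value, format(word16, '04X'))  # 2047 07FF, then -2048 F800
```

The leading nibble is 0 for non-negative values and F for negative ones, exactly as read out of the 16-bit-wide FIFO.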
  • a data acquisition operation refers to the process of providing one sequence of A/D conversions when the sample interval is carefully timed.
  • the data acquisition timing circuitry consists of various clocks and timing signals that control the data acquisition, gate the data acquisition operation, and generate scanning clocks.
  • Data acquisition operations are initiated either externally, or through software control.
  • the data acquisition operation is terminated either internally by the counter A1 of the 82C53(A) counter/timer circuitry, which counts the total number of samples taken during a controlled operation, or through software control in free-run mode.
  • samples are taken at regular intervals without any delay. Therefore, the samples are each taken using the same sample time interval. This applies to data acquisition in both free-run and controlled operation.
  • The graphical programming software package LabVIEW, developed by National Instruments Corporation, is used to program the data collecting algorithms, as this programme has become an industry standard development tool for test and measurement applications.
  • LabVIEW is a specially designed graphical programming system for data acquisition and control, data analysis, and data presentation. It offers a programming methodology in which software objects, called virtual instruments (VIs), are assembled graphically.
  • Programming in LabVIEW means building VIs instead of writing programs, and in this respect it is different from other programming applications, such as C++ or BASIC.
  • LabVIEW is a general-purpose programming system, but it includes libraries of functions and development tools designed specifically for data acquisition and instrument control.
  • LabVIEW Virtual Instruments are similar to the functions of conventional language programs.
  • a VI consists of an interactive user interface, a dataflow diagram that serves as the source code, and icon connections that allow the VI structure to be called from higher level VIs.
  • the VI front panel displayed on the computer screen consists of two graph windows together with controls and indicators for different hardware settings. Under normal use the upper graph window shows the acquired time domain waveform on-line, while the second window shows a real time Fast Fourier Transform (FFT) of the acquired waveform. Controls for setting sampling rate, frame size, number of averages to be used, and display settings, are also shown. The acquisition start button and the save-timefile button are located in the same area.
  • Hardware controls which are seldom used are hidden during normal runs. They can, however, be reached by activating scroll bars.
  • the hardware controls are used for setting channel number, trigger type, scan clock, and threshold limits for analogue triggering.
  • the ultimate goal is to enable the user to decide which sampling parameters to choose for the best result, based on the actual sampling problem.
  • the VI must be adaptive, reliable and easy to operate.
  • the user interface shall be intuitive for a user with little or some experience of data acquisition.
  • the VI must be able to display FFTs and time domain wave forms in real time.
  • the VI must be able to apply different windowing techniques to the FFTs.
  • the VI shall store the required amount of data to disk as a continuous stream of data.
  • the VI shall write a timefile capable of holding 5 to 10 seconds of continuous data sampled at the highest speed.
  • This timefile shall be organised as a matrix of stacked vectors each containing 50 000 samples. The number of vectors stacked beneath each other will then determine the length of time domain stored. For instance, 15 vectors would store 9.4 seconds, if the sampling rate is 80 kS/s.
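The stored duration follows directly from the number of stacked vectors, the vector length and the sampling rate. A small sketch of the arithmetic (the function name is ours, for illustration):

```python
def timefile_seconds(n_vectors, samples_per_vector=50_000, sample_rate=80_000):
    """Duration of time-domain data held by a timefile organised as
    n_vectors stacked vectors of samples_per_vector samples each."""
    return n_vectors * samples_per_vector / sample_rate

# 15 vectors of 50 000 samples at 80 kS/s:
print(timefile_seconds(15))  # 9.375 s, i.e. roughly the 9.4 s quoted above
```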
  • the Fast Fourier Transform (FFT) and the power spectrum are powerful tools, which often are used for analysing and measuring signals collected by data acquisition equipment.
  • the basic computational and fundamental issues needed to understand Fast Fourier Transforms shall be described.
  • antialiasing and questions related to acquisition front ends will be discussed.
  • Directional operation accelerometers are often used to monitor acoustic emissions from rotating and reciprocating machinery.
  • the signal from the accelerometer contains information about the component being monitored. Given that it is possible to characterise this type of signal and then compare signals recorded at different periods of time from the same component of machinery, it should be possible to detect the condition and performance of that component.
  • Figure 3 shows a plot of a raw signal from a pump. If a characterising signal of the pump is recorded when the pump is new and operates without faults, its acoustic spectrum can be compared with spectra recorded later.
  • the Fourier Transformation maps time domain functions into frequency domain representations and is defined as X(f) = ∫ x(t)·e^(−j2πft) dt, the integral being taken over all time, where
  • x(t) is the time domain signal and X(f) is the Fourier Transform.
  • the Discrete Fourier Transform (DFT) is the sampled counterpart, X(k) = Σ x(i)·e^(−j2πik/n), summed over i = 0, ..., n−1, where
  • x is the input sequence
  • X is its corresponding DFT
  • n is the number of samples both in the discrete time and the discrete frequency domain.
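A direct, unoptimised evaluation of the DFT definition can be sketched as follows; the FFT computes exactly the same result, only in O(n log n) operations instead of O(n²):

```python
import cmath, math

def dft(x):
    """Naive O(n^2) Discrete Fourier Transform:
    X(k) = sum over i of x(i) * exp(-j*2*pi*i*k/n), k = 0..n-1."""
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * cmath.pi * i * k / n) for i in range(n))
            for k in range(n)]

# A cosine completing exactly 2 cycles over 8 samples concentrates its
# energy in bins 2 and n-2 = 6 (the positive- and negative-frequency pair).
x = [math.cos(2 * math.pi * 2 * i / 8) for i in range(8)]
X = dft(x)
print([round(abs(v), 6) for v in X])  # magnitude 4.0 in bins 2 and 6, 0 elsewhere
```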
  • a measurement based on Fast Fourier Transforms requires digitisation of a continuous signal.
  • the sampling frequency, F_s, must be at least twice the maximum frequency component in the signal. If this criterion is violated, a phenomenon known as aliasing occurs. Aliasing is simply a misinterpretation of high frequencies as lower frequencies, as illustrated in Figure 4, in which the upper graph shows an adequate signal sampling rate, whereas the lower graph shows an inadequate signal sampling rate. The lower graph demonstrates aliasing, interpreted as a low frequency signal, but the signal is false and due only to a too low, inadequate sampling rate.
  • the stroboscope is in fact an aliasing device which is designed to represent high frequencies as low ones. It has also the ability to represent zero frequency, i.e. to apparently "freeze" a picture.
  • Figure 5 shows the alias frequencies appearing when a signal having real components at 25, 70, 160 and 510 Hz, is sampled at 100 Hz. Alias frequencies then appear at 10, 30 and 40 Hz. In Figure 5 dotted arrows indicate frequency components due to aliasing, while solid arrows indicate actual frequencies.
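The alias frequencies in Figure 5 follow from folding each component about the nearest integer multiple of the sampling frequency. A sketch of that folding rule (assuming an ideal sampler):

```python
def alias_frequency(f, fs):
    """Apparent frequency of a component f after sampling at rate fs:
    fold f about the nearest integer multiple of fs."""
    return abs(f - fs * round(f / fs))

fs = 100.0
for f in (25, 70, 160, 510):
    print(f, "->", alias_frequency(f, fs))
# 25 -> 25.0 (below Nyquist, unchanged); 70 -> 30.0; 160 -> 40.0; 510 -> 10.0
```

This reproduces the figure: components at 70, 160 and 510 Hz sampled at 100 Hz appear as aliases at 30, 40 and 10 Hz, while the 25 Hz component is unaffected.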
  • Figure 6 depicts a PCA scores plot showing the consequences of using different types of FFT parameters.
  • the original valve is referred to by numerals b1 and b2, the defective valve by reference numeral d, and a refitted valve by reference numeral r.
  • Suffixes 0, 1, 2, and 3 denote the different types of technique used during the production of the FFT:
  • a Matlab function capable of performing such a task is the PSD routine found in the Matlab Signal Processing Toolbox.
  • the PSD routine estimates the power spectrum of the sequence x using the Welch method of spectral estimation.
  • - Window specifies a windowing function and the number of samples psd uses in its sectioning of the x vector.
  • - nooverlap is a variable indicating the number of samples by which the sections overlap.
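The Welch method referred to above can be sketched in outline: the signal is split into windowed, overlapping sections, a periodogram is computed for each, and the periodograms are averaged. The following is an illustrative Python rendering of what the Matlab PSD routine's Window and nooverlap parameters control (scaling conventions differ between implementations, so this is a sketch, not a drop-in replacement):

```python
import cmath, math

def welch_psd(x, nfft, noverlap=0):
    """Welch power-spectrum estimate: Hann-windowed sections of length nfft,
    overlapping by noverlap samples; squared-magnitude DFTs averaged."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (nfft - 1)) for i in range(nfft)]
    step = nfft - noverlap
    spectra = []
    for start in range(0, len(x) - nfft + 1, step):
        seg = [x[start + i] * window[i] for i in range(nfft)]
        # one-sided squared-magnitude DFT of the windowed section
        mags = [abs(sum(seg[i] * cmath.exp(-2j * cmath.pi * i * k / nfft)
                        for i in range(nfft))) ** 2 for k in range(nfft // 2 + 1)]
        spectra.append(mags)
    return [sum(s[k] for s in spectra) / len(spectra) for k in range(nfft // 2 + 1)]

# A sine at 0.125 cycles/sample should peak in bin 0.125 * 64 = 8.
psd = welch_psd([math.sin(2 * math.pi * 0.125 * i) for i in range(256)],
                nfft=64, noverlap=32)
print(len(psd), psd.index(max(psd)))  # 33 one-sided bins; peak at bin 8
```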
  • Estimating 220 FFTs in an efficient way means constructing a well designed body for the PSD function.
  • the purpose of this routine is to act as an interface between the user and the PSD function.
  • the routine will ask for the necessary parameters and return two types of FFT files: one with the extension *.LFT (long FFT), which contains as many points as the number of samples in the time domain file; the other, with the extension *.RFT (reduced FFT), being the result of a moving average performed on the *.LFT, thus producing an FFT with a user specified number of points.
  • the routine will save and/or print a graph of each file containing a Reduced Fourier Transform (RFT). All results will be collected and saved (as a UNC-file) together with a file (UNN-file) containing the object names (actually the file-names) in the order they were processed. The routine will ask for the sampling rate and the number of variables wanted in the reduced FFT-file. This routine assumes standard DOS file-names to be used. A full listing can be found in the appendix.
  • The Show routine is used to produce plots of absolute values of the power spectrum magnitude over a 40 kHz frequency range and over a 20 kHz frequency range. This routine also produces plots depicting the same information, but with the power spectrum magnitude given in dB.
  • the result of the Makeffts routine is composed of several *.LNG-files: vectors, 32769 variables long, each variable representing an absolute value in the power spectrum.
  • the averaging routine processes these vectors and performs a user specified moving average over the frequency domain, and a new dB40-file, containing one averaged vector for each initial *.LNG-file, is produced.
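The reduction from a long spectrum to a user-specified number of points can be sketched as block averaging over the frequency axis (function and variable names are ours, for illustration):

```python
def reduce_spectrum(long_fft, n_points):
    """Reduce a long FFT magnitude vector to n_points by averaging
    consecutive blocks of frequency bins (a simple moving-average reduction)."""
    block = len(long_fft) // n_points
    return [sum(long_fft[i * block:(i + 1) * block]) / block
            for i in range(n_points)]

long_fft = list(range(1024))        # stand-in for a long magnitude vector
reduced = reduce_spectrum(long_fft, 256)
print(len(reduced), reduced[0])     # 256 points; first block averages 0..3 -> 1.5
```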
  • the result is saved in the form of the previously mentioned dB40-file in ASCII format and is the basis of the input to the data analysis programme.
  • Multivariate calibration or data analysis is a technique developed for optimal extraction of information from various forms of data by applying the necessary mathematical and statistical tools. Over the years many techniques have been developed and adopted by several professions. In short, multivariate data analysis is a technique of decomposing a matrix of raw data into two parts: a structural part and a noise part.
  • the data matrix usually consists of p variables characterising n objects. If the purpose of the analysis is to reveal latent structures within the matrix itself, i.e. to find correlations between variables and relations between objects, a technique called Principal Component Analysis (PCA) is often used. If the purpose of the analysis tends towards estimating a property and finding the variables correlating with the property, a different approach, called Partial Least Squares Regression (PLS-R), can be used. Partial Least Squares Regression is often called Partial Least Squares, or PLS for short.
  • Principal component analysis is a technique for decomposing a data matrix, X, to a varying number of orthogonal components.
  • PCA is used in order to decompose the p-dimensional data swarm into a number, A, of new dimensions or Principal Components (PCs).
  • Principal component 1 represents the direction of maximum variance
  • principal component 2 represents the second largest variance, orthogonal to principal component 1.
  • Principal component 3 is orthogonal to principal component 2, and represents the third largest variance. This process is repeated until the number of principal components equals the p dimensions of the original data swarm. It is a fundamental assumption in this kind of analysis that the direction with maximum variance is more or less (directly) connected to any latent structures in the raw matrix. Furthermore, usually only a few of the new principal components are used in the analysis, so the method actually represents a major reduction of dimensions. As a centre for this new system of principal components, the average point, given by Equation 3, is used.
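The "direction of maximum variance" idea can be made concrete in the smallest possible case, p = 2: the first principal component is then the leading eigenvector of the 2x2 covariance matrix, for which a closed form exists. A sketch (our own illustration, not the patent's implementation):

```python
import math

def pca_2d(points):
    """First principal component of mean-centred 2-D data: the eigenvector
    of the 2x2 covariance matrix with the largest eigenvalue, i.e. the
    direction of maximum variance. Returns (unit_vector, variance)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    cyy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # largest eigenvalue of [[cxx, cxy], [cxy, cyy]] and its eigenvector
    lam = (cxx + cyy) / 2 + math.sqrt(((cxx - cyy) / 2) ** 2 + cxy ** 2)
    vx, vy = lam - cyy, cxy
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm), lam

# Points lying exactly on the line y = 2x: PC1 must point along (1, 2)/sqrt(5).
pc1, var1 = pca_2d([(x, 2 * x) for x in (-2, -1, 0, 1, 2)])
print(round(pc1[0], 3), round(pc1[1], 3))  # 0.447 0.894
```

For p variables the same construction is repeated, each new component being the maximum-variance direction orthogonal to those already found.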
  • the matrix product TPᵀ consists of the score matrix T and a transposed loading matrix P.
  • the Score-matrix T contains row-wise co-ordinates for each object.
  • Score and loading plots are two central concepts of principal component analysis. A score plot is simply score vectors plotted against each other, visualising the "foot-prints" of the original objects in the X matrix. Scores visualise the co-ordinates of the objects in the new system defined by the principal components. A score plot will therefore show how objects relate to one another. Score plots are said to be "a map of object relationships". Similarly, by plotting the loading vectors of the loading matrix against one another, loading plots will be produced.
  • a loading plot shows the direction of each principal component in relation to the original co-ordinate system and will therefore also show how variables relate to one another.
  • the columns of a loading-matrix P contain the co-ordinates for each variable and represent the transformation-matrix between the X-space and the PC-space.
  • Figure 7 depicts a loading plot of PC1 vs. PC2 (p1 p2) and shows an example of a loading plot where V1, ...,V6 are different variables.
  • V1, V2, V3 are interrelated, or positively correlated, in some way, as they are grouped closely together.
  • Variables V4 and V5 also appear to be related to one another, but they are certainly not positively correlated to V1, V2 and V3, as the distance between the two groups is almost the largest possible.
  • the distance from the origin to each of the two groups is almost identical, and both groups are located very close to the X-axis.
  • the two groups of variables thus identified use different signs to describe the same phenomenon, i.e. they are negatively correlated.
  • Outlier detection and control are important aspects of all data analysis. Outliers are atypical variables or objects. If the outliers result from an erroneous measurement, they must be removed. Failing to do so will corrupt the model. On the other hand, if they represent some important phenomenon, the model will suffer severely if they are removed. If removed, the model will be unable to explain a property that is, in fact, present in the data. Identifying and removing outliers may be difficult, time-consuming, and requires experience.
  • the E-matrix, or the matrix containing the unmodelled "noisy" part of X, can be thought of in terms of "lack of fit". It contains parts of X which have not been explained or taken care of by the model TPᵀ.
  • the modelled variance shows the part of X which was taken care of by the model, and from a plot (not shown) of modelled variance it can be found that one principal component models approximately 55% of the total variance. Two principal components would be able to model approximately 77% of the total variance.
  • PLS includes a new matrix, Y, which consists of the dependent variables, whereas the X-matrix contains the independent variables.
  • the idea is to let the dependent variables Y interact with the independent variables X, while the PLS-components are established.
  • a PLS-component is the Y-guided analogue of a principal component in an X to Y regression sense. In this way it is possible to establish a regression model for the relation between X and Y. Such a model can then be used for later prediction. If the model is calibrated in the right way, and there is indeed a connection between X and Y in the data, it is possible to find a new Y from a given X, or predict Y from X.
  • the "noise" can be used as variables in the X-matrix to predict the condition of the machine as the variable in Y.
  • PLS may at first be regarded as "individual" PCAs performed on the X-matrix and on the Y-matrix, respectively, but with the very important difference that in the case of a PLS, the two models are not independent.
  • the structure of the Y-matrix is used as a guideline while decomposing the X-matrix to determine the model.
  • In PLS the idea is not to describe as much as possible of the variation in X, but to seek that part of X that is most relevant for the description of Y.
  • Two types of PLS exist: PLS1 and PLS2. They differ in that a PLS1 models only one Y-variable, whereas a PLS2 can model several Y-variables.
  • When using PLS, it is important to study the score vectors t (from the X-space) and plot them against the score vectors u (from the Y-space).
  • In PCA, different score vectors are plotted against one another in the form of a t-t plot.
  • the t-u plots from the PLS model are used to spot outliers as well as to indicate regression and linearity. Erroneous measurements can often be seen in the t-u plot as objects lying far off the "main stream".
  • ŷ = t·b̂ + ȳ, where ȳ is the mean value of the calibration data Y-values, and b̂ is the estimated relationship between the scores of X and Y in the model.
  • the P loadings are to be interpreted in the same way as in PCA, except for the fact that they are calculated by PLS. Just as in PCA, they express the relation between the raw-data matrix X and its scores (t).
  • the difference between PCA and PLS is due to the guidance from the Y-space while decomposing the X-space in PLS.
  • the loading weights, W, are the effective loadings directly connected to the relationship between X and Y, and a result of the inner regression. In other words, the difference between P and W is an expression of how much the Y-guidance has influenced the decomposition of X.
  • the purpose of validation is to make sure that the derived model will be valid for future predictions using new data of a similar character.
  • the goal of calibration is to derive a model having the best prediction ability. Then, it is natural to let the results from the validation decide the number of PLS-components to use when modelling.
  • the number of PLS components giving the lowest predictive Y-residual variance is selected, but if the residual variance plot shows a clear break before the bottom point, this may indicate that fewer components should be used in the modelling. This is especially the case if the number of components indicated by the residual variance minimum does not meet the expected number of phenomena described by the data.
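This selection rule can be sketched in code. The break_ratio threshold below is our own illustrative heuristic for "a clear break before the bottom point"; in practice the choice is a judgment call made from the residual variance plot:

```python
def pick_pls_components(residual_variance, break_ratio=0.1):
    """Choose the number of PLS components from a validated residual
    Y-variance curve (index 0 = one component): take the minimum, but
    prefer an earlier component count if the later ones barely improve.
    break_ratio is an illustrative threshold, not from the source."""
    best = min(range(len(residual_variance)), key=residual_variance.__getitem__)
    for k in range(best):
        # stop early if component k+1 is already within break_ratio of the minimum
        if residual_variance[k] - residual_variance[best] <= break_ratio * residual_variance[k]:
            return k + 1
    return best + 1

# Minimum at 4 components, but a clear break after 2:
print(pick_pls_components([1.0, 0.40, 0.38, 0.37, 0.39]))  # 2
```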
  • Validation produces a measure of both the modelling error and the prediction error.
  • the modelling error describes to what extent the X-model, on which the Y-matrix is based, is modelled, whereas the prediction error estimates the error that can be expected when the model is used for prediction.
  • Test set validation requires two sets of data, each being complete and having known X and Y values.
  • the two data sets should be similar to one another with respect to all sampling conditions. For instance, they must contain the same variables.
  • the variables should be obtained in the same way in both data sets, and the time span between the acquisition of the two data sets should not be too long. In short; the two data sets should have the same quality.
  • Test set validation represents the best way of validating a model, but requires access to sufficient data. If sufficient data for a test set validation is not accessible, a different method of validation must be performed, namely a cross-validation method. Cross-validation requires no extra test set, as the original data set is used by being divided into segments. Some of the segments thus obtained are used for modelling, while the remaining segments are used for testing.
  • One obvious method of segmentation is to divide the data into two halves, A and B.
  • Another method of segmentation is to make as many segments as there are objects. In each validation one of the segments is used for testing and the others for modelling. If the segments each consist of one object only, this is called full cross-validation or Leave-One-Out validation. The squared difference between the predicted and the real Y value with respect to each omitted sample is summed, averaged and presented as the prediction Y-variance.
  • segmented cross-validation can be used. For example, the data set can be divided into 10 sub-sets, each segment containing 10% of the objects. Firstly a model is made based on all the segments except the first; then this segment is added to the data before a new model is made where the second segment is removed, and so on. This process is repeated until all segments have been treated.
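The generation of segment splits for this procedure can be sketched as follows (an illustration of the segmentation only; the modelling step itself is omitted):

```python
def cross_validation_segments(n_objects, n_segments):
    """Yield (test, train) index lists for segmented cross-validation:
    each segment is left out once while the remaining objects are used
    for modelling. With n_segments == n_objects this becomes full
    (Leave-One-Out) cross-validation."""
    indices = list(range(n_objects))
    size = -(-n_objects // n_segments)          # ceiling division
    segments = [indices[i:i + size] for i in range(0, n_objects, size)]
    for test in segments:
        train = [i for i in indices if i not in test]
        yield test, train

splits = list(cross_validation_segments(20, 10))
print(len(splits), splits[0][0], len(splits[0][1]))  # 10 splits; first tests [0, 1] on 18 objects
```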
  • the validation Y-variance is calculated as the average of the validation Y-variance of the individual segments.
  • cross-validation is not an independent test, as all the objects affect the error estimate and are also used in the model.
  • cross-validation is the best alternative when a test set validation can not be performed.
  • Leverage correction is another possible method.
  • the leverage correction method uses the calibration data set also for validating the model. This leads to a quick, but (very) optimistic result.
  • leverage correction is useful in the early stages of the modelling phase where the main object is to get an overview of the data swarm and detect any outliers.
  • leverage correction is not recommendable as a final validation method, and here leverage correction validation is not used for any end modelling.
  • the development of a PLS model starts with plotting the X and Y matrices to get a first impression of the data structure. Matrices with different structures and proportions may need various forms of pre-processing. The minimum pre-processing performed is usually auto scaling and centering. The next step is to develop the initial model. Whether it should be a PLS1 or a PLS2 model, depends on the specific problem and obviously on the number of Y-variables available. Leverage correction may be used for this initial model due to the fact that it is just a temporary model. The t-u-scores plot will show the regression. If regression is good, the objects will lie close to a straight regression line.
  • Potential outliers will be placed more or less orthogonal to the regression line. Objects which represent extreme values are found at the ends of the regression line. If an object places itself far out in, for instance, the t1u1, t2u2, and the t3u3 score plot, then this object may be a potential outlier, and should possibly be removed. The removal of outliers should always be done with careful attention by checking against an experiment log book, seeking the cause of the objects' behaviour. Watch out for clusters of objects. If clusters emerge as distinct groups in the t-u score plot, this indicates that these groups represent different phenomena that should perhaps be analysed separately.
  • PLS1 models are developed for every single Y variable. These models should be made from the original raw data with the required detection and removal of outliers. Sometimes the t-u scores plot shows a non-linearity, even after the outliers have been removed. If this is the case, it is likely that the model will benefit from a linearising transformation of the variables. Possible transformations may be logarithmic, exponential, square root, etc. If the variable indirectly represents a physical quantity, a solution could be to transform the variable back to the unit of measurement of that physical quantity.
  • the last step is always to make the final PLS model validation proper; test set or cross-validation then being the correct validation method.
  • the optimal number of PLS-components for the model can be found in the validated predicted residual Y-variance plot. If more PLS-components than the optimal number thereof are included in the model, the model will be over-fitted, and noise will be modelled.
  • a Pred/Meas plot shows predicted Y values plotted against the measured, or referenced, Y values. This indicates how well the final model will predict.
  • RMSEP Root Mean Square Error of Prediction
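The RMSEP summarises the scatter of a Pred/Meas plot in the units of Y. Its computation over a validation set can be sketched as:

```python
import math

def rmsep(predicted, measured):
    """Root Mean Square Error of Prediction over a validation set:
    sqrt of the mean squared difference between predicted and measured Y."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

print(rmsep([1.1, 1.9, 3.2], [1.0, 2.0, 3.0]))  # about 0.141, in the units of Y
```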
  • a loading plot shows the relation between X and Y variables, and if the variables are located on the same side of an axis, they are positively correlated. Variables residing on different sides of an axis are negatively correlated. The distance along one of the axes from the origin to one specific variable represents the impact of this variable on the outcome in Y space.
  • Tests were carried out to examine whether a particular assembly of equipment and software is able to detect changes caused by cavitation, i.e. defects in an impeller, and to decide whether such changes were detectable all over the pump, or more easily detectable at some particular point.
  • One *.DAT file containing a continuous time domain signal sampled with the above mentioned parameters and saved as a matrix 512 columns wide and 100 rows deep, the duration then being approximately 1 second.
  • the first test performed was labelled a, the second b, and the different test-point locations labelled 1,..,7 respectively.
  • An object labelled a6g1 was also included in the X-matrix. This object was sampled with a gain of 1, whereas all other objects were collected with a gain of 2.
  • Figure 10 shows the data matrix plot of the pre-opening data structure.
  • the plot contains 15 objects (lines), and 256 frequency variables (columns). As can be seen from the figure, the first half of the columns is significantly higher than the rest. This difference indicates that scaling is necessary.
  • a test involving induced damage to the impeller was conducted, the purpose of the test being to establish whether or not it is possible to discriminate acoustically between different induced damages, or between a worn impeller and a new impeller. Another goal is to check if a PCA performed on the X matrix alone will be able to discriminate between the differences, if any, as well as a PLS1 discrimination model (PLS1-discrim).
  • the X matrix consists of spectra from four principal situations respectively having an old impeller, impeller damage 1, impeller damage 2, and finally a new impeller installed, all measurements being performed with a new wear-ring in the pump housing. In all four situations, measurements were taken at the 7 test-points mentioned earlier.
  • the X matrix then consists of 28 objects and 256 variables. The objects are labelled w1,..,w7, a1,..,a7, b1,..,b7 and n1,..,n7, respectively, indicating the worn impeller, damage 1, damage 2 and the new impeller.
  • the PLS1-discrim matrix contains one variable designed to quantify the damage development due to normal wear and the induced damage, as well as the effect of installing a new impeller. Such a quantification is not easy to establish. For instance, the effect of removing a piece of metal from the impeller will induce new effects due to unbalance, changed flow-patterns and turbulence in the impeller, amongst other effects.
  • One possible way to achieve such discrimination is to use the different weights of the impeller masses. It is possible to express the masses in use in the different situations as "weight reduction" relative to the new impeller mass.
  • Another, and preferred, quantification method is to give all situations involving the new impeller the value of 0, the naturally worn impeller the value of 1 and the two damages the values of 3 and 4, respectively.
  • This semi-quantitative wear/damage index includes both severe wear and representative damages.
  • Another possible Y vector design would be a "weight reduction" relative to the new impeller mass, the PLS1-discrim Y matrix then consisting of 28 objects and 1 variable ranging from 0 to 4.
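The preferred semi-quantitative Y vector can be sketched as follows; the w/a/b/n labels follow the object naming above, and the grade mapping (new = 0, worn = 1, the two damages = 3 and 4) is taken directly from the text:

```python
# Build the 28-element wear/damage index: one grade per object,
# four situations x seven test-points, labelled w1..w7, a1..a7,
# b1..b7 and n1..n7.
grades = {"w": 1, "a": 3, "b": 4, "n": 0}   # worn, damage 1, damage 2, new
objects = [f"{s}{i}" for s in "wabn" for i in range(1, 8)]
y = [grades[name[0]] for name in objects]
print(len(objects), y[:7])   # 28 objects; the first seven all graded 1 (worn)
```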
  • Figure 13 shows a data matrix plot of the X matrix used in the impeller wear- and damage analysis.
  • the standard X matrix procedure involving scaling and centering is necessary. Leverage correction is used as the validation method for this model, as the prediction performance is of no interest. It is the Y-guided decomposition of the X data that is of interest.
  • the residual X-variance plot shown in Figure 14 indicates that 3 PCs would be optimal, as they would explain approximately 83% of the variance in X.
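The "how many PCs" decision can be sketched as a cumulative explained-variance check; the component variances below are illustrative, not the actual Figure 14 data:

```python
def n_components(eigvals, target=0.83):
    """Return the number of principal components needed to explain at
    least `target` of the total variance (the criterion used above)."""
    total = sum(eigvals)
    acc = 0.0
    for k, ev in enumerate(sorted(eigvals, reverse=True), start=1):
        acc += ev
        if acc / total >= target:
            return k
    return len(eigvals)

# Made-up component variances in which the first three dominate.
print(n_components([5.0, 2.4, 1.0, 0.4, 0.2]))   # 3
```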
  • Test-point 6 is located on the opposite side of the pump housing, in the pump's vertical centreline and between test-points 2 and 3.
  • the triangle representing the second damage overlaps both the new impeller triangle and the worn impeller triangle.
  • a Principal Component Analysis (PCA) showed that the X matrix did not contain enough structure to discriminate clearly between the four experimental set-ups. Thus, it can only be concluded that PCA alone cannot discriminate effectively between the four set-ups.
  • the arrow in the PLS1-discrim t1t2 plot shown in Figure 16 represents the evolution of the acoustic spectra due to the four different situations examined.
  • As a new impeller gradually wears, there will be a movement mainly to the right in the plot.
  • Introducing a damage by breaking off a part of the impeller makes the new spectra jump upwards in the plot.
  • This sudden change in position indicates that a new situation has emerged - very unlike the situation due to normal wear.
  • Removing yet another part of the impeller does not represent a new situation; it merely manifests itself as "another damage" moving to the right.
  • As to the question of judging some test points as being "better" than others, reference is made to Figure 17, in which arrows inserted in the t1t2 plot indicate the displacement due to wear and damage for different sensor locations. As the measuring points 2, 3, 4 and 6 are situated directly on the pump housing, they are the primary ones to be considered. Hence, in Figure 17 arrows are drawn, for instance from n2 to w2 and from n2 to a2, to indicate the displacement due to wear and damage as recorded in this particular position, i.e. position 2.
  • All three sensor positions can be used for wear and damage monitoring, but sensor position 2 is to be preferred if only one sensor is to be used for monitoring normal running conditions.
  • Figure 19 shows the positions of the different spectra for three different test-points on the connecting piping.
  • Point 1 is located on the pressure side of the pump, between the non-return valve and the shut-off valve, and point 5 is located just outside the shut-off valve at the suction side of the pump, while point 7 is located closer to the pump, on the inside of the suction shut-off valve. From Figure 19 it can be seen that at all test-point locations outside the pump housing, changes in the acoustic signal due to wear and damage are detectable.
  • Location 1, outside the discharge shut-off valve, is not particularly suitable for the detection of normal wear, but is among the best for the detection of a developing damage.
  • the best locations for monitoring normal wear outside the pump housing are locations 5 and 7.
  • locations 5 and 7 are better suited than location 1.
  • for identifying the most important frequencies, the loading weights plot from the PLS1-discrim, shown in Figure 20, is useful. Variables ranging from 35 (5.5 kHz) to 210 (32.8 kHz) represent the most important frequencies, except for component number two, which has a distinct maximum at 34 kHz.
  • the figure shows positive loading weights for PC1, i.e. important variables, in a frequency range spanning from approximately 5.5kHz to 32.8kHz, and all frequencies have almost the same impact on PC1. Frequencies that are important to PC2 show a more complex pattern.
  • the plot shows that frequencies spanning from DC to 6.25 kHz contribute almost exclusively positive loading weights to PC2.
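The variable-index-to-frequency mapping implied by the pairs quoted above (variable 35 ≈ 5.5 kHz, variable 210 ≈ 32.8 kHz) can be sketched as a fixed bin width; the roughly 157 Hz per index is back-calculated from those pairs, not a figure stated in the text:

```python
# Hypothetical mapping from spectral variable index to frequency,
# assuming equally spaced bins of width 5500 Hz / 35 (about 157 Hz).
BIN_HZ = 5500.0 / 35

def var_to_khz(index):
    """Frequency (kHz) of a given spectral variable index."""
    return index * BIN_HZ / 1000.0

print(round(var_to_khz(210), 1))   # ~33.0 kHz, close to the quoted 32.8 kHz
```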
  • Figure 21 is a PCA loading plot showing that it is not the same variables that contribute to both PCA and PLS1-discrim.
  • From the PLS1-discrim loading weights plot it is verified that almost the same variables contribute in the same way to principal component 1 in both cases.
  • Principal component 2 does not show the same pattern, as it has a peak of great positive loading weights around 34 kHz in the PLS1-discrim.
  • the tendency is the opposite for the same frequencies, in that loadings are reduced in this range.
  • The purpose of a cavitation test is to check whether or not it is possible to predict the degree of cavitation in a cavitating pump. If possible, this would perhaps enable an operator to avoid cavitation by throttling the suction side of the pump. Bringing an operating pump into cavitation is easily done by gradually throttling the pump's suction valve. Throttling of the suction valve does not immediately introduce cavitation. Cavitation will not appear until the restriction in the suction pipe lowers the suction pressure so much that vaporisation occurs in the vicinity of the impeller. The acoustic emission due to early cavitation is not readily heard by the human ear, because it is drowned by the normal pump noise.
  • the main sea water pump is equipped with a lever-operated wafer-type butterfly valve.
  • the lever can be set in ten different positions, ranging from fully open (0% cavitation) to totally closed (100% cavitation).
  • Running a pump with the suction valve closed is not particularly interesting. Doing so would, in this case, cause the cooling water pressure to fall below the stand-by pump's starting pressure.
  • the pump suction valve cannot be closed more than 80% (the lever being set at position 8) without having a stand-by pump start.
  • Each of the seven measuring locations was analysed separately, and location number 6 turned out to be the best. The following analysis is therefore focused on measuring location 6 only.
  • the data set consists of an X matrix of 256 variables containing spectra originating from measuring location 6 and a Y matrix consisting of the five lever positions (3, 5, 6, 7, 8 corresponding to 30%, 50%, 60%, 70% and 80% closed, respectively) giving the different degrees of cavitation.
  • the five objects are named 3, 5, 6, 7 and 8, their names directly reflecting the degree of cavitation.
  • Cavitation was modelled using full cross-validation and the usual centered and scaled X matrix. From the validated residual Y-variance plot (not shown) and the accompanying validated explained X-variance plot (not shown), it can be found that one PLS component explains 58% of the variance in the Y-space, using only 7% of the X variance. The large difference between the two percentages does not represent a problem; the model can thus be quite adequate. The model does not need more of the X-space to model the phenomenon described by PLS component 1. This is also in accordance with the data analysis rule: one component describes one phenomenon, and, in this case, one phenomenon is to be described, namely cavitation.
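Full cross-validation as mentioned above can be sketched as a leave-one-out loop on a toy single-variable predictor; the actual model is a PLS1 on 256 spectral variables, so this only illustrates the validation scheme, with RMSEP (defined earlier) summarising the held-out errors:

```python
from statistics import mean

def loo_rmsep(x, y):
    """Leave-one-out cross-validation of a simple least-squares line:
    each object is held out in turn, the line refit on the rest, the
    held-out object predicted, and the root mean square error returned."""
    errs = []
    for i in range(len(x)):
        xr = x[:i] + x[i + 1:]
        yr = y[:i] + y[i + 1:]
        mx, my = mean(xr), mean(yr)
        b = (sum((a - mx) * (c - my) for a, c in zip(xr, yr))
             / sum((a - mx) ** 2 for a in xr))
        pred = my + b * (x[i] - mx)
        errs.append((pred - y[i]) ** 2)
    return mean(errs) ** 0.5

# Toy data roughly following y = 2x; illustrative values only.
rmsep = loo_rmsep([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(rmsep)
```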
  • a Pred/Meas plot (not shown) demonstrates that the prediction is not satisfactory, as the distance along the regression line between points 5 and 6, and also between points 7 and 8, is small. No point lies on the regression line, and the perpendicular distance from each point to the line is quite large.
  • the frequency range of 50 to 180 (7.8 - 28 kHz) is relatively "noiseless", so this frequency span is probably the one to investigate more thoroughly when looking for frequencies particularly suited for the description of cavitation.
  • it is possible to use acoustics to discriminate between levels of cavitation, but this model is not able to predict the different cavitation levels.
  • the engine tested is a four stroke engine (manufactured by Krupp MaK Machinenbau GmbH, Germany, 1981), the shaft output of which is 3550 kW, and the rotational speed 375 rpm or 6.25 1/s.
  • the engine is connected to a variable pitch propeller and a shaft generator. During all tests the engine ran idle on gas oil.
  • The fuel valve is a spring-loaded needle valve.
  • the performance of the valve governs, to a large extent, parameters vital to the combustion process.
  • A defective valve, or a valve exhibiting bad performance, will always result in higher fuel consumption, due to non-optimal combustion, and may also cause severe damage to the piston and/or liner.
  • a good fuel valve opens at a specific, predetermined pressure. It atomises and distributes the fuel well and evenly under varying load conditions and instantaneously closes totally at the end of the fuel injection period. If these criteria are not met, the valve is not performing well. Performance can be thought of as a function of drift in opening pressure, atomisation and ability to stay closed. Under normal operation, these parameters are very difficult to monitor, and if performance could be detected and predicted acoustically, a great achievement would have been made.
  • the valve labelled o1 in the test was an "OK" valve taken “off the shelf” among spare and overhauled valves. It operated well at the specified opening pressure and was given grade one. After being tested in the engine, this valve was taken out and the opening pressure lowered from 260 bars to 160 bars. This lowering of the opening pressure would result in a really bad performance due to early introduction of the fuel into the combustion space. The low tension in the valve spring could also probably induce an uneven fuel cut-off at the end of the fuel injection period, causing after-burn and carbonisation.
  • This fuel valve was given the grade five (i.e. the worst) and was labelled c in the test.
  • the object labelled d in the test is the same valve as was used in the previous tests, labelled o1 and c, but now the opening pressure was raised to 360 bars.
  • Fuel valves are manufactured with an extremely high degree of precision, which implies that they are sensitive to impacts in their nozzle region.
  • the nozzle can easily be damaged when putting the valve down on its nozzle.
  • This often happens because the crew does not think of, or, even worse, does not know the implications.
  • This situation was tested by giving the nozzle of the next test valve a gentle blow with a small hammer. This valve is labelled f in the data set and its performance was rated three.
  • the X matrix consists of 11 objects and 256 variables from the acoustic spectra
  • the Y matrix contains one variable describing the performance of the eleven fuel valves tested.
  • the eleven objects and their respective performance grading used as the Y matrix, are presented in Table 2 below.
  • Fuel valves having good performance are grouped together on the left side of t2, while the faulty ones are located to the right. This indicates that PLS component 1 describes mainly performance. An object moving to the right in the plot loses performance. Interpreting the phenomenon in the direction of PLS component 2 is not obvious, and is therefore left for later studies. Indeed this is "how the acoustics see the world" (isolated).
  • Table 3 shows the resulting objects to be used for modelling after averaging.
  • Objects a1 and a2 have been combined into a single object called Orig, and objects e10, o1, r1, r2 and r3 are combined into one object called Ok.
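The averaging of replicate objects described above amounts to element-wise means over the replicate spectra; the group names follow the text, but the spectral values below are illustrative:

```python
# Combine replicate spectra into one averaged object, as done for the
# Orig and Ok groups above. Each inner list is one object's spectrum.
def average_objects(spectra):
    """Element-wise mean across a group of equal-length spectra."""
    n = len(spectra)
    return [sum(col) / n for col in zip(*spectra)]

ok_group = [[1.0, 2.0], [3.0, 4.0], [2.0, 3.0]]   # e.g. o1, r1, r2 (toy values)
ok = average_objects(ok_group)
print(ok)   # [2.0, 3.0]
```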
  • the X matrix then consists of 6 objects and 256 variables, while the Y matrix contains a six element performance vector.
  • PLS component 1 explained more than 80% of the variance in Y. Since the number of objects was almost halved in this new model, it is reasonable to say that no more than one component should be used in this analysis.
  • Full cross- validation was used with matrices centered and scaled.
  • Figure 24 shows predicted values plotted against measured values of performance. Even as few as six points give a relatively good prediction. If the prediction had been 100% successful, all points would have been placed on a straight line with a slope equal to one. As can be seen, this is not the case, but a slope of 0.8 and a correlation of 0.9 are quite satisfactory. This plot demonstrates that it is possible to predict the performance of a fuel valve by the use of acoustics alone (i.e. by using a suitably calibrated PLS model, of course).
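The slope and correlation figures quoted for the Pred/Meas plot can be computed from paired (measured, predicted) values like this; the data below is illustrative, not the actual fuel valve data:

```python
from statistics import mean

def slope_and_corr(x, y):
    """Least-squares slope of y on x and the Pearson correlation,
    the two summary figures quoted for a Pred/Meas plot."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sxx, sxy / (sxx * syy) ** 0.5

# Toy measured vs predicted performance grades.
meas = [1, 2, 3, 4, 5]
pred = [1.2, 1.9, 3.3, 3.8, 5.1]
s, r = slope_and_corr(meas, pred)
print(s, r)
```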
  • the purpose of the main engine bearing test is to:
  • - Collect spectra from the different main bearings for comparison in a score plot.
  • the accelerometer has to be physically in contact with the bearing, whereas the signal acquisition unit and the computer are located outside the crank case. This minor problem was overcome during signal acquisition by using a thin Teflon coated coaxial cable passing through a little passage made in the crank case door packing.
  • the accelerometer used for this purpose was a Kistler accelerometer, type 8702B25, with a measuring range of ±25 g and a sensitivity of 201 mV/g, the resonant frequency of the accelerometer being 54.0 kHz.
  • In order to get the best possible signal, the accelerometer had to be placed as close to the bearing shell as possible. Both the accelerometer and its cabling had to be fastened in a secure and reliable way, so that it would not loosen during the test run. If it loosened, the accelerometer would get lost or, even worse, impose damage on the engine itself.
  • the following solution was chosen: A special stud was glued to a spare main bearing bolt nut, the accelerometer screwed onto the stud and secured with a nylon strip. Similar positions were used on all bearings in this test.
  • the data set consists of nine objects and the usual 256 frequency variables.
  • the different objects are numbered from 1 to 9, the number directly reflecting the corresponding location of the bearing in the engine.
  • Bearing number one is located in the driving end of the engine.
  • No Y matrix was included in the data set.
  • Bearing 1 is located closest to the gear, bearing 5 and 4 are considered to have the heaviest loads, and bearing 9 is close to the lubricating oil gear-pump and the vibration damper.
  • To determine what PC1 or PC2 describes, more information is needed. The information needed may be found in different maintenance reports or logs concerning this specific engine. For instance, it is possible that an answer can be found by studying the crankshaft deflection. Without such information, which was sought, more detailed interpretation of the phenomena along PC1 or PC2 will not be possible at this stage.
  • All three sensor positions can be used for wear and damage monitoring, but sensor position two is preferable, if only one sensor is to be used for monitoring normal running conditions.
  • a "Chief-in-a-Box", i.e. an electronic unit incorporating the necessary interface for the reception of signals from various kinds of sensors.
  • the signals may be processed, analysed and presented in real time by integrated processing and a user-friendly interface for the presentation of the results.
  • an engineer may add all the knowledge and capabilities included in the "Chief-in-a-Box" to his own knowledge and experience, and hence increase safety and save costs.
  • Hydroelectric plants for example, may be a future application area of equipment made in accordance with the invention, being capable of foreseeing, warning and ultimately preventing disasters from taking place.
  • the invention may be used to monitor the density of wood being cut, for example, to auto- matically adjust the cutting speed, or to monitor the saw blade wear to indicate when the blade needs to be mended or replaced.
  • AMCM: On- or off-line prediction of condition and performance of internal components in machinery such as:
  • 1. Fuel valve performance 2. Bearings; scoring and increasing friction

Abstract

The present invention relates to a method of detecting and processing acoustic signals emitted from reciprocating, oscillating or rotating objects for the purpose of recording and predicting changes in the condition of the objects. The invention also relates to an apparatus adapted to carry out the method. In particular, the method is useful in the on-line or off-line prediction of conditions and performance of internal components in machinery, so as to allow for so-called condition based maintenance, for example.

Description

ACOUSTIC CONDITION MONITORING OF OBJECTS
Technical Field
The present invention relates to a method of detecting and processing acoustic signals emitted from reciprocating, oscillating or rotating objects for the purpose of recording and predicting changes in the condition of the objects. The invention also relates to an apparatus adapted to carry out the method. In particular, the method is useful in the on-line or off-line prediction of conditions and performance of internal components in machinery, so as to allow for so-called condition based maintenance, for example, whereby no repair needs to be done before measurements and analysis of a component indicate that replacing the component really is necessary.
Background Art
Several methods and devices are known that are designed to assist in checking whether a certain piece of machinery is functioning properly or not. For example, US Patent no. 5 361 628 describes a system for processing test measurements collected from an internal combustion engine for diagnostic purposes. The system described is intended for cold-testing of newly manufactured engines. The engine being tested is cranked at a prescribed speed via an external motor, with no fuel supply or ignition. Various kinds of process data, such as oil pressure, inlet and outlet pressure, and torque diagrams, are used as input data to the analyses. It is mentioned that data for the analysis may also be entered from diagnostic sensors, such as acoustic sensors. Since a specific rotational speed is required, the known method is based on so-called triggering, and the process of filtering and pre-processing of acquired signals comprises various "equalizing" measures, including data reduction, mean-weighted mean, and AC removal, the reason being that the information is found directly in the time signal. Principal Component Analysis (PCA, described below) is used to "observe" the data in a more surveyable form, before the PCA results, during the subsequent classification and analysis, are used as input data to neural networks and other classification methods. The use of neural networks makes it possible to detect and classify fault conditions that have not been classified before. Furthermore, the prior art method is capable of modelling non-linearities in the mass of data.
From US Patent no. 5 040 734 a method is known for determining, during milling, physical properties, such as particle size, density and volume, of the material within a mill. In the publication, the use of a microphone (non-directional, operating at low frequencies) is mentioned, as well as of acoustic emission (AE, usually directional, operating at higher frequencies), but only frequencies within the audible band (50 Hz to 10 kHz) are utilized. Nevertheless, the main task is to "hear" the particle distribution during milling, to allow the milling process to be stopped when the desired distribution is reached and, in addition, to obtain mass density and volume information. This latter method does not tell anything about the condition of the machinery itself, i.e. the mill, but is entirely devoted to product control. Even so, the method makes use of acoustics, analogue-to-digital conversion, frequency transformation and Principal Component Analysis (PCA) or Partial Least Squares (PLS, also described below) to provide the desired information.
The present invention, however, relates primarily to the task of quantifying the condition of single components in an operating machinery, in such a manner that the result is useful for monitoring, maintenance and control purposes. To this end, it is a commonly known technique to carry out the following four processing steps:
1. Data acquisition
2. Signal conditioning
3. Signal analysis
4. Presentation of results
In principle, in step 1, data may be acquired from various sources. In the context of the present invention, however, the focus is on acoustic signals obtained from the object to be monitored by acoustic sensors, such as acoustic emission sensors, accelerometers, speed or position meters, microphones, etc. Even indirect sensors measuring vibration by means of laser light reflection may be used. To achieve better discrimination in the final calibrated model of a phenomenon, it may be advantageous to include relevant data of the process monitored, in addition to the acoustic signature. For a machinery, such data may be the number of revolutions, fluid flow, temperature, and power, for example. The data to be incorporated will depend on the present situation, and their relevance will be found from the loading or loading weights plot of the final calibrated model.
In respect of step 4, the result should be presented to the end user in a readily comprehensible format, such as a column in a bar chart displayed on a screen, for example, directly indicating the reduction in performance or deterioration of a condition as a percentage compared to a fresh or new component, or some other reference. Also, the result may be presented to the user by simple means, such as three coloured light signals, for example: green light representing normal operation, yellow light indicating a development or change in condition or performance which is undesirable or soon needs attention, and red light indicating a condition or performance to be immediately corrected to avoid breakdown. The interpretation of the results of the analysis carried out in step 3 above has so far required years of experience when employing prior art techniques, as they often took the form of waterfall charts or power spectra based on FFT analysis. With the analysis method according to the invention, however, the results may be presented in a more easily understandable manner.
In step 2, the signals received from the transducer(s) employed are usually digitised prior to further processing. During the process of collecting signals, the signals have to be sampled under such circumstances, and at a sufficiently high rate, that aliasing is avoided and the shape of the signal curve is best preserved. The over-sampling should be as high as practically possible, that is, in the range of 2 to 60 times higher than the highest signal frequency having significance to the phenomenon monitored. With a sufficiently high over-sampling, the problem of aliasing can be avoided without the use of low-pass filters, the raw signals instead being digitally filtered and possibly resampled at a lower frequency.
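The digital filtering and resampling step described above can be sketched as a crude moving-average low-pass followed by decimation; a real system would use a properly designed anti-aliasing filter, so this only illustrates the idea:

```python
# Sketch: average each block of `factor` samples (a crude low-pass),
# then keep one value per block (the decimation to a lower rate).
def decimate(signal, factor):
    out = []
    for i in range(0, len(signal) - factor + 1, factor):
        out.append(sum(signal[i:i + factor]) / factor)
    return out

raw = [0, 1, 2, 3, 4, 5, 6, 7]       # toy oversampled signal
print(decimate(raw, 4))              # [1.5, 5.5]
```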
In general, minimum length of the time signal depends on the circumstances, as the signal must be of such a length that the frequencies characterising the phenomenon to be monitored are included an adequate number of times to be picked up. As the process according to the present invention is capable of finding correlations between a plurality of variables at the same time, this minimum length requirement is reduced as compared to methods whereby single frequencies only are decisive in respect of the result. In fact, tests have shown that time series of a duration of 0.5 to 10 sec. cover most applications.
Within the field of the present invention, there are various kinds of instruments commercially available which are especially dedicated to condition monitoring, including accelerometers, displacement probes, acoustic emission sensors, vibration exciters, different kinds of software and complex systems. One example of such a product is the SKF Condition Monitoring M800A, which is a programmable machinery monitoring system that monitors, for example, radial vibration, axial position, temperature and speed of large screw compressors. The prior art systems mainly focus on monitoring individual critical components of a complex machinery, i.e. components such as bearings and shafts, or, to be more specific, radial or axial displacement, misalignment, damaged gears, temperature, or general mechanical looseness. Also, several signal analysis techniques like frequency analysis, enveloping, crest factor and sound intensity are frequently used. Such methods of analysis do not, however, extract all the available information inherent in the time and frequency domains.
Hence, an object of the present invention is to provide a method and apparatus which make better use of the information inherent in detectable signals emitted from working machinery, and the like; in particular, to facilitate condition based maintenance and operation of such machinery.
Disclosure of Invention According to the invention a method is provided, of detecting and processing acoustic signals emitted from reciprocating, oscillating or rotating objects to record and predict changes in the condition of the objects, the method comprising detecting and recording different types of signals emitted from said objects and having varying amplitude, wavelength or frequency, and processing said recorded signals mathematically, the result of said mathematical processing being treated further by means of multivariate calibration so as to obtain information about the condition of said objects.
The mathematical processing preferably involves carrying out a Fast Fourier Transform (FFT), or employing an Angular Measuring Technique (AMT), to obtain resulting spectra for immediate or later processing by means of said multivariate calibration, in order to separate said conditions of the objects from one another.
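The frequency transformation step can be sketched as a magnitude-spectrum computation; a naive DFT is shown for clarity (an FFT computes the same quantity faster), keeping the first N/2 bins as spectral variables:

```python
import cmath
import math

def dft_magnitude(x):
    """Magnitude of the first n/2 DFT bins of a real time signal --
    the kind of spectrum used as one row of an X matrix."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A pure tone completing one cycle per 8-sample frame concentrates in bin 1.
tone = [math.sin(2 * math.pi * t / 8) for t in range(8)]
spec = dft_magnitude(tone)
```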
According to a preferred embodiment, the method comprises the step of, in advance, producing a suitable multivariate model, preferably based on empirical data of the ideal condition of the objects, so as to allow the result of said multivariate calibration to be used to determine where the object is positioned between said ideal condition and a state of breakdown. The method according to the invention can especially be used:
- to determine a change in the acoustic image detected from a turbine and caused by a change in the position of the turbine rotor, for example, thereby enabling prediction of an approaching breakdown, - for continuous condition analysis, diagnostics and/or optimisation of operation of single objects or complete devices, such as engines, motors, generators and separators of various kinds,
- to detect formation of scaling, deformations or reductions in mills, or to determine the rate of crushing or particle size in milling or grinding equipment, - to predict tension or stress in a ship's rudder suspension or hull structure,
- to imitate an engine operator's use of his senses, to predict, in a reproducible manner, changes in conditions appearing gradually in single components and complex machinery.
According to an aspect of the present invention, one or more of a variety of methods may be employed for the mathematical signal processing when the general signal conditioning has been carried out; such methods include the methods of Gabor, Wavelets, and Wigner-Ville, in addition to Fast Fourier Transform (FFT) and Angular Measuring Technique (AMT), the latter two only being discussed in detail below. In fact, even a so-called Multi-Way Analysis, involving a plurality of variables in three or more dimensions, can be applied.
Basically, the method of the invention utilizes signals in the frequency band ranging from DC to MHz to characterise certain conditions of an object; that is, signals in the whole acoustic spectrum can be used, and normally no specific frequency range has to be selected; the effect being that, as the information is collected from a wider range of frequencies, the characterisation also is improved.
The method makes it feasible to construct instruments that are capable of simultaneously quantifying the condition of one or more components of different kinds comprised within the same piece of machinery. This is because of the efficient methods of frequency transformation and phenomenon characterisation being used, as the invention also permits the use of transformation methods (such as Gabor, Wavelets, and Wigner-Ville) simultaneously taking account of both the frequency and the time domain, so-called joint time-frequency methods. This approach according to the invention produces a signal fed to the multivariate model which contains more, and partly more diverse, information about the phenomena to be quantified; hence enabling a better discrimination and isolation of the various phenomena, and, therefore, a more correct quantification thereof. For example, the Gabor method would give a better and more robust characterisation of the base signal, with more degrees of freedom, than traditional Fourier transforms.
With the method according to the invention, both acoustic signals and process data are made use of at the same time. In fact, the method allows for a reduction in the number of sensors at present needed to perform certain control tasks. Furthermore, experience already gained in respect of a certain installation or piece of machinery can be recorded in an experience data base, and by connecting the output of the multivariate analysis to such a data base, it is possible to establish a system whereby quantified condition data is considered for the purpose of specifying specific measures to be carried out on the basis of the data and condition changes recorded. In fact, by incorporating the skills of professionals into such a system, savings are expected in respect of educating personnel, at the same time as more relevant and accurate measures are ensured, more quickly than before. Actually, the skills required of the operator may be reduced at the same time as the efficiency and operating life of the monitored equipment increase.
Hence, the method according to the invention combines the best from well-proven sensor technologies with empirical calibration of virtually any type of instruments or signals. The method, therefore, does not rely on specific sensors, nor on one specific method of analysis. It is composed of a set of flexible options for the optimal combination of sensors and their accompanying pre-processing methods, which act as alternative, or complementary, inputs for signal analysis and for multivariate instrument and signal calibrations. Adding the feature of multivariate calibration data analysis to these well known tasks produces a new approach to condition based maintenance, this approach being named Acoustic Machine Condition Monitoring, or AMCM for short.
Brief Description of Drawings
Other objects and features of the present invention will appear from the description below, given by way of example only, and with reference to the accompanying drawings, on which:
Figure 1 is a block diagram of a typical data acquisition system, Figure 2 is a digitised sine wave with 3-bit resolution, Figure 3 shows a raw signal acquired from a pump, Figure 4 illustrates adequate (top graph) and inadequate (bottom graph) signal sampling,
Figure 5 shows different aliasing frequencies,
Figure 6 is a PCA scores plot showing the consequences of using different types of FFT parameters,
Figure 8 is a loading plot of PC1 vs. PC2 (p1p2),
Figure 8 is a first impression of comparable spectra,
Figure 9 is a flow chart of tests performed on a main sea water pump,
Figure 10 is a matrix plot of a pre-opening data structure, Figure 11 is a PCA score-plot showing the paired replicates of 7 test-points,
Figure 12 is a single vector plot showing the location of curves,
Figure 13 is a data matrix plot of the X matrix used in the impeller wear- and damage analysis,
Figure 14 is a residual X-variance plot, Figure 15 is a PCA score plot (t1t2) showing a partly successful discrimination between four experimental set-ups,
Figure 16 is a PLS1-discrim score plot illustrating wear as a movement to the right, and a damage as a movement upwards,
Figure 17 is a PLS1-discrim t1t2 plot including arrows indicating displacement due to wear and damage for different sensor locations,
Figure 18 is a t1t2 plot from PLS1-discrim with measurement points 1, 5 and 7 forming triangles, each representing one wear or damage situation,
Figure 19 is a plot illustrating different positions of spectra from three measuring locations on pump piping, Figure 20 is a loading weights plot from the PLS1-discrim,
Figure 21 is a PCA loading plot showing different contributions of the variables to PCA and PLS1-discrim, respectively,
Figure 22 is a t1u1 score plot showing clear groupings,
Figure 23 is a t1t2 score plot from the PLS1 model showing the same tendencies as in the t1u1 plot of Figure 22,
Figure 24 is a diagram showing predicted values plotted vs. measured values of fuel injection valve performance, and
Figure 25 is a diagram showing the resulting PCA score plots when frequencies below 7.8 kHz and above 32.8 kHz are removed.
Modes for Carrying Out the Invention
1. DATA ACQUISITION
Data acquisition systems (DAQ) based on computers, in particular personal computers (PCs) fitted with suitable plug-in boards, are used for a wide range of applications in the laboratory, in the field, and on the manufacturing plant floor. Such data acquisition boards of the general-purpose type are well-suited instruments for measuring voltage signals. However, to measure analogue signals it is usually not sufficient just to wire the signal source lines to such a data acquisition board.
Figure 1 is a block diagram of a typical data acquisition system, and as can be seen from the figure, a typical data acquisition system consists of a transducer 2, which registers the physical phenomena 1 in question and converts the phenomena into a more convenient form, for example a voltage or current signal 3. This signal is fed to a front-end pre-processing unit 4 for signal conditioning, before it is delivered to the PC data acquisition unit 5. This front-end pre-processing 4 is necessary, because many transducers require a bias or excitation by current or voltage, bridge completion, linearisation, or high amplification for reliable and accurate operation. The integrity of the acquired data depends upon the entire analogue signal path.
1.1 The Transducer
The transducers most commonly used in such data acquisition systems convert physical quantities, such as temperature, strain, pressure and acceleration, into electrical quantities, such as voltage, resistance or current. The characteristics of the transducer actually being used define the signal requirements of the data acquisition system. An example of such a transducer is the piezoelectric accelerometer, which typically comprises a slab of quartz crystal. Simply by squeezing the slab, a potential difference can be produced across it, but the crystal is not capable of a true DC response. Also, such piezoelectric elements will produce a charge only when acted upon by dynamic forces.
However, when a piezoelectric accelerometer is vibrated, forces proportional to the applied acceleration act on the piezoelectric elements. The charge generated is then picked up by electric contacts. The piezoelectric element is characterised by an extreme linearity over a very wide dynamic range and frequency range. The sensitivity of a piezoelectric material is usually specified in pC/N (picoCoulomb per Newton) and the sensitivity of an accelerometer in mV/g. The frequency range can easily span 1 Hz to 25 kHz. Piezoelectric crystals are anisotropic, i.e. have different properties in different directions. Hence, the accelerometer exhibits directional properties which are characterised by a transverse sensitivity down to as low as 1 % of the reference sensitivity.
1.2 Signal Conditioning
Adequate signal conditioning equipment will improve the quality and performance of a system, regardless of the type of sensor or transducer being used, and typically the signal conditioning functions include functions such as amplification, filtering and isolation of any type of signals.
As transducers in general produce output signals of the order of millivolts or even microvolts, amplifying such signals directly on the data acquisition board would imply amplifying any noise picked up by the signal wires and from within the computer itself. If the signal is as small as microvolts, this noise can mask the signal itself, resulting in meaningless data. Hence, as excessive noise will greatly reduce the measurement accuracy of a PC-based data acquisition system, the amplification of a signal should take place outside the PC chassis, preferably near the signal source, as this will effectively reduce the effect of noise and improve measurement resolution.
It is, however, favourable to boost the input signal as much as possible, such that the input range of the equipment is fully utilised. In order to reach the highest possible accuracy, the signal should be amplified so that the maximum voltage range of the conditioned signal equals the maximum input range of the data acquisition board.
Then, if the noise level of the accelerometer is less than 40 μV (RMS) (such as the Brüel & Kjær accelerometer type 4502), the smallest signal that can be recorded is 40 μV, giving a sensitivity of 0.040 ms⁻², and if the input wires from the accelerometer travel 10 m through an electrically noisy plant environment before reaching the data acquisition board, the various noise sources in the environment may possibly induce as much as 200 μV in the wires, and a noisy acceleration reading of about 5 ms⁻² will result.
By amplifying the signal close to the source, before noise corrupts it, this problem is alleviated. Amplifying the signal with a gain of 800, by means of a signal conditioner placed near the accelerometer, produces an amplified accelerometer signal of 32 mV. When such a high-level signal travels the same 10 m, a noise of 200 μV coupled onto the signal path after signal amplification has much less effect on the final reading, adding only a fraction (0.625 %) of noise to the measured acceleration reading. Hence, the analogue signal is boosted to well above the noise level before noise in the wires can corrupt the signal.
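The arithmetic of this example can be written out as follows, using the figures quoted above:

```python
noise_floor = 40e-6     # V, smallest recordable accelerometer signal
induced_noise = 200e-6  # V, noise induced along 10 m of cabling
gain = 800              # gain applied by the conditioner near the source

# Amplified at the source, the 40 uV signal becomes 32 mV, so the same
# 200 uV of cable noise is now only a small fraction of the signal.
amplified_signal = noise_floor * gain             # 0.032 V = 32 mV
noise_fraction = induced_noise / amplified_signal # 0.00625, i.e. 0.625 %
```

Without the source-side gain, the same 200 μV of cable noise would be five times larger than the 40 μV signal itself.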
In addition, unwanted signals should be removed from the analogue accelerometer signal by means of a filter. Instead of the noise filters often used to attenuate high-frequency variations in DC-class signals, such as signals representing temperature, a different type of filter, an antialiasing filter, is more often employed for AC-class signals, such as signals representing vibration. Such antialiasing filters are low-pass filters with a very steep cut-off rate, almost completely removing all frequencies higher than a given frequency. If such frequencies were not removed, they would erroneously appear as valid signals within the actual measuring bandwidth.
Furthermore, signal conditioning is often used to isolate the transducer signals from the computer for safety reasons. The system being monitored may contain high-voltage transients that could damage the computer. Another reason for implementing isolation is to make sure that the readings from the data acquisition board are not affected by differences in ground potentials or common-mode voltages. If the input signal and the acquired signal are each referenced to "ground", problems will occur if there is a potential difference between the two "grounds". This difference will form a so-called ground loop, which may cause inaccurate representation of the acquired signal, or, if too large, may damage the measuring system. Isolation of the signal conditioner prevents most of these problems by passing the signal from its source to the measurement device without a galvanic or physical connection. Isolation breaks ground loops, rejects high common-mode voltages, and ensures that the signals are accurately acquired.
As some transducers need to be excited or biased to work properly, the signal conditioning unit usually generates the excitation of such transducers. For example, strain gauges, thermistors, and accelerometers require an external supply of voltage or current excitation. Vibration measurements are usually made with a constant-current source that converts the variation in resistance to a measurable voltage. Some transducers, e.g. thermocouples, have a non-linear response to changes in the phenomena being measured. Therefore, linearisation is also often performed by the signal conditioning unit before the signal is fed to an analogue-to-digital converter included in the conditioning unit (see below).
1.3 Data Acquisition Variables
The product specifications of standard data acquisition boards indicate features such as the number of channels, sampling rate, resolution, range, accuracy, noise, and non-linearity, all of which influence the quality of a digitised signal. Also, in respect of boards having single-ended and differential inputs, the number of analogue channel inputs is specified for both kinds of inputs. Single-ended inputs are all referenced to a common ground point, and should be used when the input signals are high-level (higher than 1 V) signals, the cables from the signal source to the analogue input hardware are short (less than 3 m), and all input signals share the same common ground reference. If this is not the case, differential inputs should be used. With differential inputs, each input has its own ground reference. Noise errors are reduced, because common-mode noise picked up by the wires is cancelled out.
To enable computerised processing of the amplified transducer signal, the data acquisition board or the computer itself comprises an analogue-to-digital converter (ADC) adapted to convert the input analogue signal to a digital value. It is favourable to boost the input signal to the analogue-to-digital converter as much as possible, so that the input range of this converter is fully utilised. The sampling rate of the converter determines how often conversion is to take place. A faster sampling rate acquires more points in a given time span and thus can often create a better representation of the signal. In order to digitise a signal properly, the Nyquist Sampling Theorem states that sampling must be performed at a minimum of twice the rate of the maximum frequency component subjected to detection. A simple heuristic says that, in order to recreate a signal properly after digitisation, the signal must be over-sampled at 8 to 10 times the rate of the maximum frequency component present.
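The Nyquist criterion can be demonstrated numerically; the 1 kS/s sampling rate and the tone frequencies below are invented for the example:

```python
import numpy as np

fs = 1_000                     # assumed sampling rate, S/s
t = np.arange(0, 1, 1 / fs)

# A 900 Hz tone violates the Nyquist criterion at fs = 1 kS/s
# (900 > fs/2 = 500 Hz) and aliases down to |1000 - 900| = 100 Hz.
above_nyquist = np.sin(2 * np.pi * 900 * t)
alias_tone = np.sin(2 * np.pi * 100 * t)

# Sample for sample, the undersampled 900 Hz tone equals the negated
# 100 Hz tone: sin(2*pi*0.9*n) = -sin(2*pi*0.1*n) for integer n, so the
# two signals are indistinguishable once digitised.
indistinguishable = np.allclose(above_nyquist, -alias_tone, atol=1e-9)
```

This is exactly the erroneous in-band signal that the antialiasing filter described above is meant to prevent.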
In the context of analogue-to-digital conversion, some important aspects should be considered:
- To increase the number of channels, multiplexing is a commonly used technique whereby multiple channels are routed to a single analogue-to-digital converter. The analogue-to-digital converter then samples one channel, switches to the next channel and samples it, switches to the following channel, and so on. Since the same analogue-to-digital converter is used to sample many channels instead of one, the effective rate of each channel is inversely proportional to the number of channels sampled.
- The resolution is the number of bits that the analogue-to-digital converter uses to represent the analogue signal. The higher the resolution, the higher the number of input voltage divisions the range is broken into, and, therefore, the smaller the detectable voltage change will be. Figure 2 shows a sine wave and its corresponding digital image as obtained by an ideal 3-bit analogue-to-digital converter. A 3-bit analogue-to-digital converter (which is actually seldom used, but a convenient example) divides the analogue range into 2³, or 8, divisions. Each division is represented by a binary code between 000 and 111. Clearly, information is lost in the conversion, so a 3-bit code is not a good representation of the analogue signal. By increasing the resolution to 16 bits, the number of codes from the analogue-to-digital converter increases from 8 to 65536, and a very good representation of the original analogue signal is obtained.
- The range refers to the minimum and maximum voltage levels that the analogue-to- digital converter is able to quantify. Most analogue-to-digital converter boards offer selectable ranges, so that the board is configurable to handle a variety of different voltage levels. By changing the range, the analogue signal can be adapted to the range of the analogue-to-digital converter board, so that the signal is measured with maximum accuracy.
- The smallest detectable change in voltage is called the code width and is represented by the least significant bit of the digital value. The code width is a function of the range, resolution, and gain available on the analogue-to-digital converter board. The ideal code width is found by dividing the voltage range by the gain times 2 raised to the number of bits of resolution. For example, an analogue-to-digital converter board of the type DAQCard 1200, available from National Instruments Corporation, has a selectable voltage range of 0 - 10 V or ±5 V, 12-bit resolution, and a selectable gain of 1, 2, 5, 10, 20, 50, 100. With a voltage range of 0 to 10 V and a gain of 100, the ideal code width equals 10/(100×2¹²) = 24.4 μV. Therefore, the theoretical resolution of one bit of the digitised value is 24.4 μV.
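Both the 3-bit quantisation of Figure 2 and the ideal code width formula can be sketched as follows; the sine-wave input is hypothetical, while the code width figures are those of the DAQCard 1200 example quoted above:

```python
import numpy as np

# 3-bit quantisation of one period of a sine wave spanning a +-1 V range:
# the range is split into 2**3 = 8 divisions, coded 0 (000) to 7 (111).
bits = 3
x = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
codes = np.clip(((x + 1) / 2 * 2**bits).astype(int), 0, 2**bits - 1)

# Ideal code width of the DAQCard 1200 example: 10 V range, gain 100,
# 12-bit resolution -> 10 / (100 * 2**12) = 24.4 uV.
code_width = 10 / (100 * 2**12)
```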
1.4 Measuring Referenced and Non-Referenced Signal Sources
A grounded or ground-referenced measurement system is one in which the voltage signal is referenced to the construction system ground. A grounded signal source is best measured with a differential or non-referenced measurement system. However, there are some pitfalls in using a ground-referenced measuring system to measure a grounded signal source. The voltage measured will in this case be the sum of the signal voltage and the potential difference that exists between the signal ground and the measurement system ground. This potential is generally not a DC level. Thus, the result is a noisy measurement system, often showing power-line frequency components in the readings. Noise which is introduced by ground loops may have both AC and DC components, introducing offset errors as well as noise in the measurements. The potential difference between the two grounds causes a current - called the ground-loop current - to flow in the interconnection. A ground-referenced system can still be used if the signal voltage levels are high and the interconnection wiring between the source and the measurement device has a low impedance. If this is the case, the signal voltage measured is degraded by the ground loop, but the degradation may be tolerable. The differential (DIFF) and the non-referenced single-ended (NRSE) inputs on a typical data acquisition board both provide a non-referenced measurement system.
In a differential measurement system properly designed to measure a grounded signal source, any potential difference between the references of the source and the measuring device appears as a common-mode voltage to the measurement system, and is subtracted from the measured signal. However, steps are preferably taken to have non-referenced signal sources, i.e. a floating source and differential input configuration. In such an arrangement, resistors are included to provide a return path to ground for the instrumentation amplifier input bias currents.
When the transducer and the signal conditioning unit are battery operated, as in the preferred situation, the measurement is non-referenced. The signal is floating with respect to ground. Floating signal sources can be measured using both differential and single-ended measurement systems. If a differential measurement system is used, then care should be taken to ensure that the common-mode voltage level of the signal with respect to the measurement system ground remains within the common-mode input range of the measurement device. In the AC-coupled set-up shown, two bias resistors are connected between each wire and measurement ground. The bias resistors prevent the input bias current in the instrumentation amplifier from moving the voltage level of the floating source out of the valid range of the input stage of the data acquisition board. This serves to prevent offset errors as well as noise in the measurement. The two resistors included in such an arrangement provide a DC path from the instrumentation amplifier input terminals to the instrumentation amplifier ground. Failure to use such resistors will result in erratic or saturated (positive full-scale or negative full-scale) readings. The two resistors must be of a value large enough to allow the source to float with respect to the measurement reference and not load the signal source, and small enough to keep the voltage within the range of the input stage of the analogue-to-digital converter board.
2. THE SIGNAL CONDITIONING UNIT
As can be seen from the above discussion, a signal conditioning unit ought to be versatile. It should handle many different types and makes of transducers. It should also be able to add gain to signals, or even attenuate signals from the transducer. Finally, it must handle aliasing problems. Thus, the following criteria or parameters should be taken into account in the design of a signal conditioning unit:
- It should bias different types of accessible accelerometers and stress wave sensors, such as those provided by, for instance, Kistler and Brüel & Kjær.
- Its gain should be easy and precise to set by manual switches or by a controlling computer.
- It ought to be able to amplify the signal without distortion and within a reasonable bandwidth, by up to one thousand times.
- It should contain an antialiasing filter.
- It should be battery operated, and with a battery capacity capable of at least three hours of continuous operation.
- Saturation or overloading of the filter input or the amplifier output should be indicated.
- It should contain a low-frequency amplifier capable of driving a headset. - For test reasons, it should contain a wave generator.
- It should contain a triggering device capable of powering an external proximity or optical switch. The triggering device should be TTL compatible and deliver every pulse, or every second pulse, at choice.
- All settings, and output and input signals, should be accessible through a 25-way 'D' connector, so that the conditioner can communicate with the analogue-to-digital converter board and the computer software.
- It should provide the necessary isolation for the system to operate safely.
- It should have (at least) two separate channels.
- It should be able to operate with differential, single-ended, referenced or non-referenced inputs, and the input should be of high input impedance.
- The output should be of low impedance and be able to source at least ±5 mA with a swing of ±8 V.
- The overall gain bandwidth product should be at least 100 MHz.
- It should be easy to operate and carry around, thus making it ideal for field operations.
A suitable amplifier arrangement comprises one monolithic instrumentation amplifier PGA 202, available from Burr-Brown, having digitally controlled gains of 1, 10, 100, and 1000, connected to one model PGA 203 amplifier from the same supplier, providing gains of 1, 2, 4, and 8. Both amplifiers have TTL or CMOS compatible inputs for easy microprocessor interfacing. As the two channels making up the signal conditioning or signal amplifying unit are identical, only one channel will be referred to in the following description.
The amplifiers PGA 202 and PGA 203 both have FET inputs, which give extremely low input bias currents. Because of the FET inputs, the bias currents drawn through input source resistors have a negligible effect on DC accuracy; the picoamp currents produce merely microvolts across megohm sources. A return path for the input bias currents is provided through 1 Mohm resistors connected between the inputs and analogue ground (AGND). Without this return path, the amplifier could wander and saturate because of possible stray capacitance or any current leakage through the coupling capacitors. These capacitors prevent the excitation of the transducer from reaching the instrumentation amplifier inputs; failing to block it would make the instrumentation amplifier saturate immediately.
The output stage of the amplifiers PGA 202 and PGA 203 is a differential trans-impedance amplifier with laser-trimmed output resistors to help minimise the output offset and drift. The rated output current is typically ±10 mA when |Vout| < 10 V. The output impedance is 0.5 ohm. All power supplies to both instrumentation amplifiers are decoupled with 1 μF tantalum, and 0.1 μF and 1 nF ceramic, capacitors. The capacitors are located as close to the instrumentation amplifier as possible for maximum performance. To avoid noise, gain and CMR errors, the digital ground (DGND) and the analogue ground (AGND) are separated.
The first instrumentation amplifier is designed with an output offset adjustment circuit, while the second amplifier has both input offset and output offset adjustment circuits. The adjustment circuits are individually adjustable. The choice of buffer operational amplifier is very important for the performance of the output offset adjustment circuit. In this arrangement the Burr-Brown OPA 602 is used, because of its low impedance and wide bandwidth. The offset adjustment controls are accessible from the front panel of the signal conditioner.
Gain selection is accomplished by applying a two-bit word to the gain select inputs, A0 to A3, and gains can be selected from 1 to 8000 when having one PGA 202 and one PGA 203. With, for instance, two PGA 203 amplifiers cascaded, the gain would be selectable in 16 steps, with a gain ranging from 1 to 64. The gain selection is set in two ways: either by setting the hardware switches on the front panel, or by the computer software, provided the hardware switches are set in the "ON" position, reading the digital word "1111". The hardware switches directly reflect the selectable gains. On the single channel signal amplifying unit these signals should be "floating", and the switches placed in the intermediate position. The cut-off frequency of the output amplifier is set to 1 MHz. With a total system gain of less than 1000, the gain will be constant over a frequency spectrum ranging from DC to 1 MHz. With system gains greater than 1000, the gain will be constant over a frequency spectrum ranging from DC to 100 kHz.
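The cascaded gain selection can be sketched as below; the bit-to-gain mapping is assumed for illustration and should in practice be taken from the Burr-Brown data sheets:

```python
# Hypothetical mapping of each amplifier's two-bit gain select word to its
# gain: the PGA 202 offers gains 1/10/100/1000 and the PGA 203 gains
# 1/2/4/8, so the cascade spans 1 to 8000 in 16 combinations.
pga202 = {0b00: 1, 0b01: 10, 0b10: 100, 0b11: 1000}
pga203 = {0b00: 1, 0b01: 2, 0b10: 4, 0b11: 8}

# All combined gains of the cascade, one per pair of two-bit words.
gains = sorted(a * b for a in pga202.values() for b in pga203.values())
```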
The signal conditioning unit is equipped with a 10-pole inverse Chebyshev filter. This filter is realised by the use of five cascaded UAF42 universal active filters available from Burr-Brown. The UAF42 is a monolithic integrated circuit (IC) which contains the operational amplifiers, matched resistors, and precision capacitors needed for a state-variable filter pole-pair. A fourth uncommitted operational amplifier is also included on the die. Usually, active filter design and verification are tedious and time consuming. To aid in the design of their active filters, Burr-Brown provides a computer-aided filter design program under the trade name FilterPro. This program is used to design and implement the antialiasing filter.
The inverse Chebyshev filter type is chosen and implemented because this filter type has a flat magnitude response in the pass-band with a steep rate of attenuation in the transition-band. Ripple in the filter's stop-band and some overshoot and ringing in the step response, are undesirable but unavoidable. They are considered as being without significance in this application. Since the analogue-to-digital converter employed has a maximum sampling rate of 100 kS/s, the filter cut-off frequency is set at 47.0 kHz with a stop-band attenuation of -100 dB. Then the -3 dB frequency is 25.51 kHz, so that the whole audio frequency domain ranging from DC to 20 kHz will be unattenuated even when the filter is in use.
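The original filter was designed with FilterPro; a comparable analogue prototype can be sketched with SciPy, assuming the stop-band specification quoted above (10 poles, stop-band edge 47.0 kHz, 100 dB minimum stop-band attenuation):

```python
import numpy as np
from scipy import signal

# 10-pole inverse Chebyshev (Chebyshev type II) low-pass, analogue
# prototype: stop-band edge 47 kHz, minimum stop-band attenuation 100 dB.
b, a = signal.cheby2(10, 100, 2 * np.pi * 47e3, btype='low', analog=True)

# Evaluate the magnitude response at three frequencies of interest:
# the 20 kHz audio band edge, the quoted -3 dB point, and a stop-band
# frequency.
w = 2 * np.pi * np.array([20e3, 25.51e3, 60e3])
_, h = signal.freqs(b, a, worN=w)
gain_db = 20 * np.log10(np.abs(h))
# gain_db[0]: audio band edge, close to 0 dB (unattenuated)
# gain_db[1]: approximately -3 dB, matching the 25.51 kHz figure above
# gain_db[2]: at least 100 dB down in the stop band
```

This confirms the text's figures: the audio band up to 20 kHz passes essentially unattenuated while everything beyond 47 kHz is suppressed by 100 dB or more.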
3. COLLECTING AND PROCESSING DATA ON-LINE
The last segment in the data acquisition system chain consists of the computer, the data acquisition board and the acquisition software. The performance of these three components is crucial to the performance of the whole data acquisition system. Optimisation can be achieved in several ways, with respect to price, weight, or processing speed. In the following, versatility, cost-effectiveness and portability are given most consideration.
3.1 The computer
Carrying out on-line collection, pre-processing, saving and display of data at a speed of 100 000 samples per second requires a relatively fast and powerful computer. Also, the storage capacity must be large enough to store the necessary raw data until it can be saved on some other kind of mass storage. A time domain of 9.4 seconds sampled at 80 kS/s represents 750 000 individual values. Storing such values in a spread sheet format occupies 8.246 Mbytes. The computer used for this purpose is a Dell Latitude 90XPi, which is a standard off-the-shelf laptop computer with a 90 MHz Intel Pentium processor, 40 Mbytes RAM and a 1.2 Gbytes removable hard disk. In addition, the computer is equipped with two PCMCIA slots suitable for National Instruments' DAQCard 1200.
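The cost of spreadsheet-style (ASCII) storage quoted above, roughly 11 bytes per value, is why binary storage matters for disk throughput; a sketch comparing the two formats for 16-bit samples (the random data are illustrative only):

```python
import io

import numpy as np

# 50 000 samples of 12-bit data held as 16-bit integers, as delivered by
# the ADC described below.
data = (np.random.default_rng(0)
        .integers(-2048, 2048, 50_000)
        .astype(np.int16))

# Binary: essentially 2 bytes per sample (plus a small file header).
binary_buf = io.BytesIO()
np.save(binary_buf, data)
binary_size = binary_buf.getbuffer().nbytes

# ASCII, one decimal value per line: several bytes per sample.
ascii_buf = io.StringIO()
np.savetxt(ascii_buf, data, fmt='%d')
ascii_size = len(ascii_buf.getvalue())
```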
Unfortunately, the international PCMCIA standard does not support Direct Memory Access (DMA). During data storage the main processor will be burdened with memory management in addition to its usual tasks. Consequently, simultaneous use of a maximum sampling rate, online viewing of long windowed Fast Fourier Transforms (FFTs), and storing on disk, will cause problems. The cyclic buffer will fill up, bringing the process to a halt, because the data can not be written to disk fast enough. However, this can, at least to a certain extent, be overcome in software by giving disk shuffling the highest priority and by storing the data as binary files instead of ASCII files.
3.2 The data acquisition board
The National Instruments DAQCard 1200 is used for data acquisition. The DAQCard 1200 is a low-cost, low-power analogue input, analogue output, digital I/O card for personal computers equipped with a PCMCIA Type II slot. The card contains a 12-bit, successive-approximation analogue-to-digital converter having eight inputs, which can be configured as eight single-ended or four differential channels. The card also has 12-bit digital-to-analogue converters (DACs) having voltage outputs, 24 lines of TTL-compatible digital I/O, and three 16-bit counter/timer channels for timing I/O. All these facilities are available through a 50-pin connector and cable which plugs directly onto the card.
The analogue input circuitry of the DAQCard 1200 consists of two analogue input multiplexers, mux counter/gain select circuitry, a software-programmable gain amplifier, a 12-bit ADC, and a 12-bit FIFO memory that is sign-extended to 16 bits. In the present case, only a few of the facilities provided by the DAQCard are required for collecting the data. Here, only the ADC itself will briefly be discussed, because a certain knowledge of the operations of the ADC is important in order to understand the acquisition software.
The 12-bit resolution of the converter allows the converter to resolve its input range into 4096 different steps. The ADC input range is either ±5 V or 0 to 10 V. With a gain equal to 1, this will give a resolution of 10/2¹² = 2.44 mV. When the A/D conversion is complete, the ADC clocks the result into the A/D FIFO. This FIFO serves as a buffer to the ADC. The A/D FIFO can collect up to 1024 A/D conversion values before any information is lost, thus allowing software some extra time to catch up with the hardware. An error condition, called FIFO overflow, occurs if more than 1024 samples are stored in the FIFO before being read. This error will result in a loss of information and must be avoided.
The output from the ADC can be interpreted as either straight binary or two's complement. Here, the DAQCard works in bipolar mode, and the data from the ADC is then interpreted as a 12-bit two's complement number having a range of -2048 to +2047. The output from the ADC is sign-extended to 16 bits, causing either a leading 0 or a leading F-hex to be added, depending on the coding and the sign. Thus, data read from the FIFO are 16 bits wide.
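The sign extension described above can be sketched as a small helper; the function name is hypothetical:

```python
def sign_extend_12(code: int) -> int:
    """Interpret a raw 12-bit ADC code as a two's-complement number.

    Codes 0x800..0xFFF map to -2048..-1 (the leading F-hex case in the
    text), while codes 0x000..0x7FF map to 0..2047 (the leading 0 case).
    """
    code &= 0xFFF                          # keep only the 12 ADC bits
    return code - 0x1000 if code & 0x800 else code
```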
A data acquisition operation refers to the process of providing one sequence of A/D conversions in which the sample interval is carefully timed. The data acquisition timing circuitry consists of various clocks and timing signals that control the data acquisition, gate the data acquisition operation, and generate scanning clocks. Data acquisition operations are initiated either externally or through software control. The data acquisition operation is terminated either internally, by counter A1 of the 82C53(A) counter/timer circuitry, which counts the total number of samples taken during a controlled operation, or through software control in free-run mode. In a continuous data acquisition operation, samples are taken at regular intervals without any delay. Therefore, the samples are each taken using the same sample time interval. This applies to data acquisition in both free-run and controlled operation.
3.3 The graphical programming environment
To establish a graphical programming environment, a graphical programming software package developed by National Instruments Corporation, called LabVIEW, is used to program the data collecting algorithms, as this program has become an industry standard development tool for test and measurement applications. LabVIEW is a specially designed graphical programming system for data acquisition and control, data analysis, and data presentation. It offers a programming methodology in which software objects, called virtual instruments (VIs), are assembled graphically. Programming in LabVIEW means building VIs instead of writing programs, and in this respect it is different from other programming applications, such as C++ or BASIC. (Other programming systems use text-based languages to create lines of code, while LabVIEW uses a graphical programming language, G, to create programs in block diagram form. LabVIEW is a general-purpose programming system, but it includes libraries of functions and development tools designed specifically for data acquisition and instrument control.) LabVIEW Virtual Instruments (VIs) are similar to the functions of conventional language programs. Hence, a VI consists of an interactive user interface, a dataflow diagram that serves as the source code, and icon connections that allow the VI structure to be called from higher level VIs.
3.4 The Collect Data Virtual Instrument (VI)
The VI front panel displayed on the computer screen consists of two graph windows together with controls and indicators for different hardware settings. Under normal use the upper graph window shows the acquired time domain waveform on-line, while the second window shows a real time Fast Fourier Transform (FFT) of the acquired waveform. Controls for setting sampling rate, frame size, number of averages to be used, and display settings, are also shown. The acquisition start button and the save-timefile button are located in the same area.
Hardware controls which are seldom in use are hidden during normal runs. They can, however, be reached by activating scroll bars. The hardware controls are used for setting channel number, trigger type, scan clock, and threshold limits for analogue triggering.
The following criteria and design parameters form the basis for the development of the Collect Data Virtual Instrument:
- The ultimate goal is to enable the user to decide which sampling parameters to choose for the best result, based on the actual sampling problem.
- The VI must be adaptive, reliable and easy to operate. The user interface shall be intuitive for a user with little or some experience of data acquisition.
- The VI must be able to display FFTs and time domain waveforms in real time.
- The VI must be able to apply different windowing techniques to the FFTs.
- The VI shall store the required amount of data to disk as a continuous stream of data.
- The VI shall write a timefile capable of holding 5 to 10 seconds of continuous data sampled at highest speed. This timefile shall be organised as a matrix of stacked vectors, each containing 50 000 samples. The number of vectors stacked beneath each other then determines the length of the time domain stored. For instance, 15 vectors would store 9.4 seconds if the sampling rate is 80 kS/s.
- Data must be written to disk in such a manner that the data is easily retrievable, preferably in a spreadsheet format.
- Sampling parameters and comments must be recorded in a separate file.
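The timefile sizing rule above (stacked vectors of 50 000 samples each) is simple arithmetic; as a minimal sketch, a helper like the following (function name and defaults are illustrative, not part of the VI) reproduces the 9.4-second example:

```python
def timefile_duration(n_vectors, samples_per_vector=50_000, sample_rate=80_000):
    """Seconds of time-domain signal held by a timefile of stacked vectors."""
    return n_vectors * samples_per_vector / sample_rate

# 15 vectors of 50 000 samples at 80 kS/s hold 9.375 s, the ~9.4 s quoted above.
```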
4. FFT-BASED SIGNAL ANALYSIS AND MEASUREMENT
The Fast Fourier Transform (FFT) and the power spectrum are powerful tools, which are often used for analysing and measuring signals collected by data acquisition equipment. Here some of the basic computational and fundamental issues needed to understand Fast Fourier Transforms shall be described. Also, antialiasing and questions related to acquisition front ends will be discussed.
Directional operation accelerometers are often used to monitor acoustic emissions from rotating and reciprocating machinery. The signal from the accelerometer contains information about the component being monitored. Given that it is possible to characterise signals of this type and then compare signals recorded at different periods of time from the same component of machinery, it should be possible to detect the condition and performance of that component. As an example, Figure 3 shows a plot of a raw signal from a pump. If a characterising signal of the pump is recorded when the pump is new and operates without faults, its acoustic spectrum can be compared with spectra recorded later.
Depending on the kind of tests or calibration having been carried out earlier, faults originating from specific components should now be detectable. It is often believed that the information in question is present within the audio frequency range (0 - 22 kHz). Raw signals are then sampled with a sampling frequency of 44 kHz, or more. This leads to a situation where relatively great amounts of data have to be stored. In addition, mutual comparisons of such spectra are indeed difficult to accomplish, so the classical method for characterising such spectra is by use of Fourier Transformations.
The Fourier Transformation maps time domain functions into frequency domain representations and is defined as:
X(f) = ∫_(-∞)^(+∞) x(t) e^(-j2πft) dt Eq. 1
where x(t) is the time domain signal and X(f) is the Fourier Transform.
Similarly, the Discrete Fourier Transform (DFT) maps discrete time sequences into discrete frequency representations and is given by:
X_k = Σ_(i=0)^(n-1) x_i e^(-j2πik/n), for k = 0, 1, 2, ..., n - 1 Eq. 2
where x is the input sequence, X is its corresponding DFT, and n is the number of samples both in the discrete time and the discrete frequency domain.
Direct implementation of the DFT according to Eq. 2, requires approximately n2 complex operations. However, there are computationally efficient algorithms available which require approximately n log2(n) operations. Algorithms of this kind are called Fast Fourier Transforms (FFT).
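As a hedged illustration of the complexity difference, the direct O(n²) evaluation of Eq. 2 can be written out and checked against a library FFT (NumPy's implementation is used here purely as a reference; the helper name is mine):

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) evaluation of Eq. 2: X_k = sum_i x_i e^(-j*2*pi*i*k/n)."""
    n = len(x)
    i = np.arange(n)            # time index
    k = i.reshape(-1, 1)        # frequency index, as a column
    return (x * np.exp(-2j * np.pi * i * k / n)).sum(axis=1)

x = np.random.default_rng(0).standard_normal(256)
assert np.allclose(naive_dft(x), np.fft.fft(x))   # identical result, far slower
```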
A measurement based on Fast Fourier Transforms requires digitisation of a continuous signal. According to the Nyquist Criterion, the sampling frequency, Fs, must be at least twice the maximum frequency component in the signal. If this criterion is violated, a phenomenon known as aliasing occurs. Aliasing is simply a misinterpretation of high frequencies as lower frequencies, as illustrated in Figure 4, in which the upper graph shows an adequate signal sampling rate, whereas the lower graph shows an inadequate one. The lower graph demonstrates aliasing: the signal is interpreted as a low frequency signal, but that signal is spurious and due only to a too low, inadequate sampling rate.
The two graphs shown should make it easier to understand what aliasing implies. For example:
1. In western films a cartwheel often appears to run backwards or slowly forward, because of the sampling involved in filming, relative to the rotation speed of the wheel.
2. The stroboscope is in fact an aliasing device which is designed to represent high frequencies as low ones. It has also the ability to represent zero frequency, i.e. to apparently "freeze" a picture.
When the Nyquist criterion is violated, frequency components above half the sampling frequency appear as frequency components below half the sampling frequency. The result will be an erroneous representation of the signal. For example, Figure 5 shows the alias frequencies appearing when a signal having real components at 25, 70, 160 and 510 Hz, is sampled at 100 Hz. Alias frequencies then appear at 10, 30 and 40 Hz. In Figure 5 dotted arrows indicate frequency components due to aliasing, while solid arrows indicate actual frequencies.
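The alias frequencies quoted above follow from folding each component about multiples of the Nyquist frequency; a small sketch (my own helper, not from the source) reproduces them:

```python
def alias_frequency(f, fs):
    """Apparent frequency of a component f after sampling at fs Hz:
    fold f into [0, fs), then mirror anything above the Nyquist limit fs/2."""
    f = f % fs
    return fs - f if f > fs / 2 else f

# Components at 25, 70, 160 and 510 Hz sampled at 100 Hz: 25 Hz is below the
# Nyquist limit and stays put; the others alias to 30, 40 and 10 Hz.
```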
5. PRE-ANALYSING ROUTINES
In experiments performed onboard the LPG/C vessel "Norgas Mariner" a considerable amount of data was produced. A total of 3 Gbytes of data was logged. Prior to analysis, 220 individually recorded timefiles had to be processed. Making an FFT of a timefile containing 750 000 values is very time-consuming, so finding the optimum way of processing the files in the Unscrambler programme with respect to subsequent analysis was vital. Looking for the maximum variance in a Principal Component Analysis (PCA) scores plot, a small pilot scale study was designed to find the window length to use for the FFT, and to decide whether the process of averaging every sample window was more advantageous than performing an FFT over the complete time domain file.
The effects of running with a defective pressure valve on a Sulzer cargo compressor were chosen as a realistic and suitable test. Figure 6 depicts a PCA scores plot showing the consequences of using different types of FFT parameters. In the figure, the original valve is referred to by numerals b1 and b2, the defective valve by reference numeral d, and a refitted valve by reference numeral r. Suffixes 0, 1, 2, and 3 denote the different types of technique used during the production of the FFT:
- 0 indicating a 512 points averaged FFT file saved by LabVIEW,
- 1 indicating a 1024 points FFT used on the first 50 000 points of a long timefile,
- 2 indicating a 50 000 points FFT applied on all the fifteen single vectors making up the timefile and the result averaged to 512 points, and
- 3 indicating use of the same FFT window, but now applied to just one of the vectors in the timefile before averaging to 512 points.
As can be seen from Figure 6, all samples having suffix 2 are located furthest out in the plot. The samples having suffix 0 are located closer to the plot centre. The outer connecting line in the plot represents samples with larger variance than the points connected by the inner line. The conclusion that can be drawn from this simple test is that an FFT with a large window produced over a long time domain discriminates better than an FFT with a small window produced over a short time domain. Based on this result it was decided to use an FFT length of 2^15, or 32 768 samples, and a 50%, or 2^14 points, moving average over the total length of the timefile.
5.1 Making the FFTs
A Matlab function capable of performing such a task is the PSD routine found in the Matlab Signal Processing Toolbox. The PSD routine estimates the power spectrum of the sequence x using the Welch method of spectral estimation.
The function call used is: [Pxx,f] = psd(x,nfft,Fs,window,noverlap), where:
- nfft specifies the FFT length that psd uses. The value of nfft determines the frequencies at which the power spectrum is estimated.
- Fs is a scalar specifying the sampling frequency.
- window specifies a windowing function and the number of samples psd uses in its sectioning of the x vector.
- noverlap is a variable indicating the number of samples by which the sections overlap.
- Pxx is the vector holding the power spectrum returned, and
- f is the accompanying frequency vector.
Estimating 220 FFTs in an efficient way means constructing a well designed body for the PSD function. The purpose of this routine is to act as an interface between the user and the PSD function. The routine will ask for the necessary parameters and return two types of FFT files: one with the extension *.LFT (long FFT), which contains as many points as the number of samples in the time domain file; the other, with the extension *.RFT (reduced FFT), being the result of a moving average performed on the *.LFT file, thus producing an FFT with a user specified number of points. A cut-off frequency must also be specified by the user, and all frequencies above the cut-off limit are truncated. The routine will save and/or print a graph of each file containing a Reduced Fourier Transform (RFT). All results will be collected and saved (as a UNC-file) together with a file (UNN-file) containing the object names (actually the file-names) in the order they were processed. The routine will ask for the sampling rate and the number of variables wanted in the reduced FFT-file. This routine assumes standard DOS file-names to be used. A full listing can be found in the appendix.
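The Welch estimation performed by the Matlab psd call can be sketched in plain NumPy; the function below is an illustrative stand-in, not the original routine, mirroring the nfft, Fs and noverlap parameters (a Hanning window is assumed here):

```python
import numpy as np

def welch_psd(x, nfft, fs, noverlap):
    """Welch power-spectrum estimate: split x into Hanning-windowed segments
    of nfft samples overlapping by noverlap samples, FFT each segment, and
    average the squared magnitudes."""
    window = np.hanning(nfft)
    step = nfft - noverlap
    n_seg = (len(x) - noverlap) // step
    spectra = []
    for s in range(n_seg):
        seg = x[s * step : s * step + nfft] * window
        spectra.append(np.abs(np.fft.rfft(seg)) ** 2)
    pxx = np.mean(spectra, axis=0)
    f = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return pxx, f

# A 1 kHz tone sampled at 50 kS/s should yield a single dominant peak near 1 kHz.
fs = 50_000
t = np.arange(fs) / fs
pxx, f = welch_psd(np.sin(2 * np.pi * 1_000 * t), nfft=2048, fs=fs, noverlap=1024)
```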
5.2 Displaying the FFTs
A Show routine is used to produce plots of absolute values of the power spectrum magnitude over a 40 kHz frequency range and over a 20 kHz frequency range. This routine also produces plots depicting the same information, but with the power spectrum magnitude given in dB.
In order to compare the different FFTs produced, it is necessary to have a routine capable of displaying several graphs in the same window. The routine uses the *.LFT files, even though other file types can be displayed as well. It then performs a moving average over the 2^15 + 1 variables. The number of variables resulting from this averaging is user specified. The user is also able to specify the cut-off frequency, the file names to display and the sampling frequency to be used.

5.3 The Averaging Routine
The result of the Makeffts routine is composed of several *.LNG files: vectors, 32 769 variables long, each variable representing an absolute value in the power spectrum. The averaging routine processes these vectors and performs a user specified moving average over the frequency domain, and a new dB40-file, containing one averaged vector for each initial *.LNG file, is produced. The original absolute values of each vector are also converted to dB units by the following algorithm: X = 10 x LOG(X). The result is saved in the form of the previously mentioned dB40-file in ASCII format and is the basis of the input to the data analysis programme.
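The averaging routine's two steps (averaging a long spectrum down to a user specified number of points, then converting with X = 10 x LOG(X)) can be sketched as follows; this is an illustrative reimplementation in NumPy, not the original code, and block averaging is assumed as the reduction method:

```python
import numpy as np

def reduce_spectrum(pxx, n_out):
    """Block-average a long power spectrum down to n_out points,
    then convert to dB units via X = 10 * log10(X)."""
    pxx = np.asarray(pxx, dtype=float)
    edges = np.linspace(0, len(pxx), n_out + 1).astype(int)
    reduced = np.array([pxx[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    return 10 * np.log10(reduced)

# Reducing a 32 769-point spectrum to 512 points, as described above:
db40 = reduce_spectrum(np.ones(32_769), 512)
```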
6. MULTIVARIATE DATA ANALYSIS
It is a fact that "nature is multivariate", i.e. that a phenomenon usually depends on several factors. For instance, the weather depends on variables such as the wind, air pressure, temperature, season, etc.
The "health" of a complex piece of machinery, such as the main engine on board a ship, depends on the general maintenance performed, fuel oil quality, load, trade, operating environment, and so on. One specific property rarely depends on one single variable only; most problems are multivariate. Multivariate calibration or data analysis is a technique developed for optimal extraction of information from various forms of data by applying the necessary mathematical and statistical tools. Over the years many techniques have been developed and adopted by several professions. In short, multivariate data analysis is a technique of decomposing a matrix of raw data into two parts: a structural part and a noise part.
The structural part can later be interpreted as the part of the signal correlated with the phenomenon being studied. The rest of the raw observation matrix is considered to be "noise". This philosophy leads to a vital conclusion: It is the problem definition itself that determines which portion of the signal that is to be considered as being information, and which part simply being "noise".
The data matrix usually consists of p variables characterising n objects. If the purpose of the analysis is to reveal latent structures within the matrix itself, i.e. to find correlations between variables and relations between objects, a technique called Principal Component Analysis (PCA) is often used. If the purpose of the analysis tends towards estimating a property and finding the variables correlating with the property, a different approach, called Partial Least Squares Regression (PLS-R), can be used. Partial Least Squares Regression is often called Partial Least Squares, or PLS for short.
6.1 Principal Component Analysis
Principal component analysis is a technique for decomposing a data matrix, X, into a varying number of orthogonal components. PCA is used in order to decompose the p-dimensional data swarm into a number, A, of new dimensions or Principal Components (PCs). Principal component 1 represents the direction of maximum variance, and principal component 2 represents the second largest variance, orthogonal to principal component 1. Principal component 3 is orthogonal to principal components 1 and 2, and represents the third largest variance. This process is repeated until the number of principal components equals the p dimensions of the original data swarm. It is a fundamental assumption in this kind of analysis that the direction with maximum variance is more or less (directly) connected to any latent structures in the raw matrix. Furthermore, usually only a few of the new principal components are used in the analysis, so the method actually represents a major reduction of dimensions. As the centre for this new system of principal components, the average point given by Equation 3 is used.
(x̄_1, x̄_2, ..., x̄_p), where x̄_j = (1/n) Σ_(i=1)^n x_ij Eq. 3
To ensure that all variables are compared on the same footing, a form of scaling is often needed. Some variables could, for example, be measured in Pascal, others in grams/kW, giving very uneven variations. Misinterpretation due to lack of scaling can be prevented by dividing each column of the X-matrix by the variable's standard deviation, as shown in Equation 4.
x_(j,STD) = x_j / STD(x_j) Eq. 4
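Combining the centring about the average point (Eq. 3) with the scaling of Eq. 4 gives the auto scaling used as minimum pre-processing later in the text; a minimal NumPy sketch of my own, assuming the sample standard deviation is the one intended:

```python
import numpy as np

def autoscale(X):
    """Centre each column of X about its mean (Eq. 3) and divide by its
    standard deviation (Eq. 4), so variables in different units get
    equal footing."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
```

After autoscaling, every column has zero mean and unit standard deviation, regardless of whether it was measured in Pascal or grams/kW.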
Principal component analysis is based on the assumption that a raw data matrix X can be split into the sum of a matrix product, TP^T, and a residual matrix E. Equation 5 below illustrates the centred principal component model.

X = TP^T + E Eq. 5
One important aspect of PCA is that the matrix product TP^T consists of the score matrix T and a transposed loading matrix P. The score-matrix, T, contains row-wise co-ordinates for each object. Score and loading plots are two central concepts of principal component analysis. A score plot is simply score vectors plotted against each other, visualising the "foot-prints" of the original objects in the X matrix. Scores visualise the co-ordinates of the objects in the new system defined by the principal components. A score plot will therefore show how objects relate to one another. Score plots are said to be "a map of object relationships". Similarly, by plotting the loading vectors of the loading matrix against one another, loading plots are produced. A loading plot shows the direction of each principal component in relation to the original co-ordinate system and will therefore also show how variables relate to one another. The columns of a loading-matrix, P, contain the co-ordinates for each variable and represent the transformation-matrix between the X-space and the PC-space.
Figure 7 depicts a loading plot of PC1 vs. PC2 (p1 p2) and shows an example of a loading plot where V1, ..., V6 are different variables. The figure indicates that variables V1, V2, V3 are interrelated, or positively correlated, in some way, as they are grouped closely together. Variables V4 and V5 also appear to be related to one another, but they are certainly not positively correlated with V1, V2 and V3, as the distance between the two groups is almost the largest possible. On the other hand, the distance from the origin to each of the two groups is almost identical, and both groups are located very close to the X-axis. The two groups of variables thus identified use different signs to describe the same phenomenon, i.e. they are negatively correlated.
Outlier detection and control are important aspects of all data analysis. Outliers are atypical variables or objects. If the outliers result from an erroneous measurement, they must be removed. Failing to do so will corrupt the model. On the other hand, if they represent some important phenomenon, the model will suffer severely if they are removed. If removed, the model will be unable to explain a property that is, in fact, present in the data. Identifying and removing outliers may be difficult and time-consuming, and requires experience.
In addition, two other important factors should be mentioned before leaving the principal component analysis, namely the concept of modelled variance, and the number of principal components to be used in a given model. The E-matrix, or the matrix containing the unmodelled "noisy" part of X, can be thought of in terms of "lack of fit". It contains the parts of X which have not been explained or taken care of by the model TP^T. The modelled variance shows the part of X which was taken care of by the model, and from a plot (not shown) of modelled variance it can be found that one principal component models approximately 55% of the total variance. Two principal components would be able to model approximately 77% of the total variance.
The optimum number of principal components chosen for a model is found by studying the total residual variance plot. In this case, such a plot (not shown) indicates that a clear break point is present after principal component number 3. It seems as if the next principal component does not have much more "to bite into". The total residual variance plot evens out, and the gain per additional principal component is significantly smaller than before. Since the direction of maximum variance is more or less directly connected to the interpretable latent structures in the raw matrix, it follows that "large" principal components may be correlated with the information sought, whereas "smaller" principal components represent noise and thus are mainly, or totally, irrelevant to the problem. Bearing in mind that the "correct" number of PCs to be used to describe a phenomenon is problem dependent, this case shows that three principal components would be adequate for modelling this specific problem.
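A compact way to see Eq. 5 and the per-component variance in practice is PCA via the singular value decomposition; the sketch below is illustrative only (SVD is one standard way to compute PCA, not necessarily the method used in this work):

```python
import numpy as np

def pca(X, n_components):
    """PCA of a data matrix via SVD of the centred data: X = T P^T + E (Eq. 5).
    Returns scores T, loadings P, and the fraction of total variance
    carried by each principal component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :n_components] * s[:n_components]   # scores (object co-ordinates)
    P = Vt[:n_components].T                      # loadings (variable directions)
    explained = s**2 / np.sum(s**2)              # variance fraction per PC
    return T, P, explained
```

With all components kept, E vanishes and T P^T reproduces the centred X exactly; truncating to a few components is the dimension reduction described above.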
6.2 Partial Least Squares Regression (PLS)
The method of partial least squares regression goes further than principal component analysis. PLS includes a new matrix, Y, which consists of the dependent variables, whereas the X-matrix contains the independent variables. The idea is to let the dependent variables Y interact with the independent variables X while the PLS-components are established. A PLS-component is the Y-guided analogue of a principal component in an X to Y regression sense. In this way it is possible to establish a regression model for the relation between X and Y. Such a model can then be used for later prediction. If the model is calibrated in the right way, and there is indeed a connection between X and Y in the data, it is possible to find a new Y from a given X, or predict Y from X.
This is close to the basic idea of the present invention: If there is a correlation between the "noise" emitted from a specific machine and the condition of the machine, by way of PLS, the "noise" can be used as variables in the X-matrix to predict the condition of the machine as the variable in Y. PLS may at first be regarded as "individual" PCAs performed on the X-matrix and on the Y-matrix, respectively, but with the very important difference that in the case of a PLS, the two models are not independent. The structure of the Y-matrix is used as a guideline while decomposing the X-matrix to determine the model. With PLS the idea is not to describe as much as possible of the variation in X, but to seek that part of X that is most relevant for the description of Y.
Two types of PLS exist: PLS1 and PLS2. They differ in that a PLS1 models only one Y-variable, whereas a PLS2 can model several Y-variables. When using PLS, it is important to study the score vectors t (from the X-space) and plot them against the score vectors u (from the Y-space). In comparison, with PCA different score vectors are plotted against one another in the form of a t-t plot. The t-u plots from the PLS model are used to spot outliers as well as to indicate regression and linearity. Erroneous measurements can often be seen in the t-u plot as objects lying far off the "main stream". If such objects are not removed, they can cause unwanted distortion and twist in the model. If a PLS model describes an objective correlation between the X and Y-spaces, plotting t vs. u will yield a linear relationship. The final model consists of the score-matrices T and U, matrices that are linearly related with a coefficient b in the manner indicated in Equation 6:
u = bt + e Eq. 6
where, in this case:
- e represents a residual, and
- b is termed "the inner regression relationship" between u and t and is used for the calculation of subsequent factors, if the intrinsic dimension of X is greater than one.
To perform prediction on an unknown sample, the following algorithm may be used: ŷ = t^T b̂ + ȳ, where ȳ is the mean value of the calibration data Y-values, and b̂ is the estimated relationship between the scores of X and Y in the model.
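As an illustrative sketch only (a textbook NIPALS-style PLS1, not the actual software used in this work), the decomposition with loading weights W, loadings P and the inner relation of Eq. 6 can be put together with the prediction step as follows:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1 sketch: centre the data, extract n_comp components,
    and form the regression vector B = W (P^T W)^-1 q used for prediction."""
    X = X - (x_mean := X.mean(axis=0))
    y = y - (y_mean := y.mean())
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)      # loading weight, guided by y
        t = X @ w                   # X-scores
        p = X.T @ t / (t @ t)       # X-loadings
        b = y @ t / (t @ t)         # inner relation u = b t (Eq. 6)
        X = X - np.outer(t, p)      # deflate X
        y = y - b * t               # deflate y
        W.append(w); P.append(p); q.append(b)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return x_mean, y_mean, B

def pls1_predict(model, X_new):
    """Prediction in the spirit of y_hat = t^T b_hat + y_bar."""
    x_mean, y_mean, B = model
    return y_mean + (X_new - x_mean) @ B
```

With as many components as there are informative X-dimensions and noise-free data, the prediction recovers an exact linear X-Y relation; with fewer components it seeks only the Y-relevant part of X, as described above.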
6.3 Loadings and Loading Weights
During modelling, several matrices are generated: the score matrices T and U mentioned above, and, in addition, two sets of X-loadings, P and W, and one set of Y-loadings, Q. The P loadings are to be interpreted in the same way as in PCA, except for the fact that they are calculated by PLS. Just as in PCA, they express the relation between the raw-data matrix X and its scores (t). The difference between PCA and PLS is due to the guidance from the Y-space while decomposing the X-space in PLS. The loading weights, W, are the effective loadings directly connected to the relationship between X and Y and a result of the inner regression. In other words, the difference between P and W is an expression of how much the Y-guidance has influenced the decomposition of X.
6.4 Validation
The purpose of validation is to make sure that the derived model will be valid for future predictions using new data of a similar character. The goal of calibration is to derive a model having the best prediction ability. It is then natural to let the results from the validation decide the number of PLS-components to use when modelling. The number of PLS components giving the lowest predictive Y-residual variance is selected, but if the residual variance plot shows a clear break before the bottom point, this may indicate that fewer components should be used in the modelling. This is especially the case if the number of components indicated by the residual variance minimum does not meet the expected number of phenomena described by the data.
Different types of validation can be performed on the model. Which one to choose depends greatly on the availability of data for validation. Validation produces a measure of both the modelling error and the prediction error. The modelling error describes to what extent the X-model, on which the Y-matrix is based, is modelled, whereas the prediction error estimates the error that can be expected when the model is used for prediction.
Test set validation requires two sets of data, each being complete and having known X and Y values. The two data sets should be similar to one another with respect to all sampling conditions. For instance, they must contain the same variables. The variables should be obtained in the same way in both data sets, and the time span between the acquisition of the two data sets should not be too long. In short, the two data sets should have the same quality.
One of the data sets, the calibration set, is used to make the model. The other, the validation set, is used for model testing only. The technique implies comparison of the Y variables predicted by the model with the measured Y variables in the test set. Test set validation represents the best way of validating a model, but requires access to sufficient data. If sufficient data for a test set validation is not accessible, a different method of validation must be performed, namely cross-validation. Cross-validation requires no extra test set, as the original data set is divided into segments. Some of the segments thus obtained are used for modelling, while the remaining segments are used for testing. One obvious method of segmentation is to divide the data into two halves, A and B. Then, one model is made of A and tested against B, and afterwards another model is made of B and tested against A. The total prediction error is then found as the mean of the two prediction errors from modelling A and B. This method is called Test Set Switch. Another method of segmentation is to make as many segments as there are objects. In each validation one of the segments is used for testing and the others for modelling. If the segments each consist of one object only, this is called full cross-validation or Leave-One-Out validation. The squared difference between the predicted and the real Y value with respect to each omitted sample is summed, averaged and presented as the prediction Y-variance.
When many objects have to be taken into consideration, or the selection of representative objects for model testing poses a problem, segmented cross-validation can be used. For example, the data set can be divided into 10 sub-sets, each segment containing 10% of the objects. First a model is made based on all the segments except the first; then this segment is added back to the data before a new model is made where the second segment is removed, and so on. This process is repeated until all segments have been treated. The validation Y-variance is calculated as the average of the validation Y-variance of the individual segments.
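The segmentation scheme just described can be sketched generically; the helper below is my own illustration (the simple least-squares model in the usage note is likewise only an example) and pools squared prediction errors over the held-out segments:

```python
import numpy as np

def segmented_cv_rmse(X, y, fit, predict, n_segments=10):
    """Segmented cross-validation: split the objects into n_segments parts,
    model on all but one segment, predict the held-out segment, and pool
    the squared prediction errors into a single RMSEP."""
    idx = np.arange(len(y))
    errors = []
    for seg in np.array_split(idx, n_segments):
        train = np.setdiff1d(idx, seg)          # all objects except this segment
        model = fit(X[train], y[train])
        errors.extend((predict(model, X[seg]) - y[seg]) ** 2)
    return np.sqrt(np.mean(errors))
```

With n_segments equal to the number of objects, this reduces to the full Leave-One-Out cross-validation described above; any fit/predict pair (for instance a plain least-squares fit) can be plugged in.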
Note that cross-validation is not an independent test, as all the objects affect the error estimate and are also used in the model. Nevertheless, cross-validation is the best alternative when a test set validation cannot be performed.
Leverage correction is another possible method. The leverage correction method uses the calibration data set also for validating the model. This leads to a quick, but (very) optimistic result. However, leverage correction is useful in the early stages of the modelling phase, where the main object is to get an overview of the data swarm and detect any outliers. Leverage correction is not recommended as a final validation method, and here leverage correction validation is not used for any end modelling.

6.5 Developing a PLS model
The development of a PLS model starts with plotting the X and Y matrices to get a first impression of the data structure. Matrices with different structures and proportions may need various forms of pre-processing. The minimum pre-processing performed is usually auto scaling and centering. The next step is to develop the initial model. Whether it should be a PLS1 or a PLS2 model depends on the specific problem and obviously on the number of Y-variables available. Leverage correction may be used for this initial model, due to the fact that it is just a temporary model. The t-u scores plot will show the regression. If the regression is good, the objects will lie close to a straight regression line. Potential outliers will be placed more or less orthogonal to the regression line. Objects which represent extreme values are found at the ends of the regression line. If an object places itself far out in, for instance, the t1u1, t2u2, and t3u3 score plots, then this object may be a potential outlier, and should possibly be removed. The removal of outliers should always be done with careful attention, by checking against an experiment log book, seeking the cause of the object's behaviour. Watch out for clusters of objects. If clusters emerge as distinct groups in the t-u score plot, this indicates that these groups represent different phenomena that perhaps better should be analysed separately.
PLS1 models are developed for every single Y variable. These models should be made from the original raw data with the required detection and removal of outliers. Sometimes the t-u scores plot shows a non-linearity, even after the outliers have been removed. If this is the case, it is likely that the model will benefit from a linearising transformation of the variables. Possible transformations may be logarithmic, exponential, square root, etc. If the variable indirectly represents a physical quantity, a solution could be to transform the variable back to the unit of measurement of that physical quantity.
The last step is always to make the final PLS model with proper validation, test set or cross-validation then being the correct validation method. The optimal number of PLS-components for the model can be found in the validated predicted residual Y-variance plot. If more PLS-components than the optimal number are included in the model, the model will be over-fitted, and noise will be modelled.
A Pred/Meas plot shows predicted Y values plotted against the measured, or reference, Y values. This indicates how well the final model will predict. In a perfect model, offset and RMSEP (Root Mean Square Error of Prediction) are exactly zero and all the objects lie on a straight line with a slope equal to 1. This is of course impossible in statistical modelling, but the degree to which a particular model actually comes close is a very good measure of the model's predictive abilities. Such a plot is in fact easy to comprehend, even for non-professionals.
A loading plot shows the relation between X and Y variables, and if the variables are located on the same side of an axis, they are positively correlated. Variables residing on different sides of an axis are negatively correlated. The distance along one of the axes from the origin to one specific variable represents the impact of this variable on the outcome in Y space.
7. DATA ANALYSIS
Prior to all data analyses, it is wise to check the data at hand. Since data may be acquired through accelerometers of different brands and from widely differing sources, such as a centrifugal pump and a ship main engine, it is natural to visually compare FFTs from different sources.
In Figure 8, three frequency spectra are shown; one spectrum obtained from a ship main sea water pump and another spectrum obtained from a ship engine fuel injection valve, both acquired through a first accelerometer (Brüel & Kjær 4396), and one ship engine main bearing spectrum acquired through a second accelerometer (Kistler 8702B25). By comparing the spectra shown, a stunning resemblance appeared at 28 kHz, near the resonant frequency of the first accelerometer, as can be seen from the figure. All three spectra show similar pronounced peaks in the frequency range of 33 kHz to 34 kHz.
It seems highly improbable that spectra from such varied sources, and acquired by two completely different accelerometers, should group together at the same mounted resonant frequency as that (28 kHz) of one of the accelerometers, when in fact the other accelerometer has a different mounted resonance frequency, specified to be 54.0 kHz. Usually mounting reduces the mounted resonance frequency by zero to five kHz, the reduction typically being caused by variations in the mounting media. Therefore, the cause of the peaks located in the range between 32 to 35 kHz must be sought elsewhere, perhaps in the Signal Conditioning Unit.

7.1 Testing a Main Sea Water Pump
Tests were carried out to examine whether a particular assembly of equipment and software is able to detect changes caused by cavitation, i.e. defects in an impeller, and to decide whether such changes were detectable all over the pump, or more easily detectable at some particular point.
An initial test was performed to find the best accelerometer positions, and proper settings of gain, sample rate and window length. The initial test resulted in an extension of the original five test points to seven. Sampling was performed with a gain of two, a sample rate of 50 kS/s, a frame size of 512 samples and 100 averages. The test showed a fairly stable FFT at all measuring points.
For each test performed, four different types of files were recorded:
1. One *.DAT file containing a continuous time domain signal, sampled with the above mentioned parameters and saved as a matrix 512 columns wide and 100 rows deep, the duration then being approximately 1 second.
2. One *.FFT file containing an average FFT of the 100 individual FFTs produced by each window in the time domain signal recorded in the *.DAT file.
3. One *.LNG file containing a continuous time domain signal sampled at 80 kS/s, with a frame 50,000 samples wide and 15 lines deep, this time file representing approximately 9.4 seconds of the time domain signal and having a size of about 8 Mbytes.
4. One *.PAR file containing user comments and the recording parameters used for the present record.
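The relation between the *.DAT and *.FFT records described above can be sketched as follows, assuming the averaging is a plain mean of the per-frame magnitude spectra (the actual recording software may differ):

```python
import numpy as np

# Sketch: 100 frames of 512 samples each (the *.DAT matrix) are transformed
# individually, and the 100 magnitude spectra are averaged into one 256-bin
# spectrum (the *.FFT record). Sampling parameters as in the text:
# 50 kS/s, frame size 512, 100 averages.
fs = 50_000
rng = np.random.default_rng(2)
dat = rng.normal(size=(100, 512))                     # stand-in for the *.DAT matrix

spectra = np.abs(np.fft.rfft(dat, axis=1))[:, :256]   # one FFT per frame
avg_fft = spectra.mean(axis=0)                        # the averaged *.FFT record

duration = dat.size / fs                              # 51,200 samples at 50 kS/s
print(f"bins: {avg_fft.shape[0]}, duration: {duration:.3f} s")
```

The 1.024 s duration matches the "approximately 1 second" stated for the *.DAT record.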
The main test sequences carried out are schematically illustrated in Figure 9.
Prior to any work on the worn pump unit, two pre-opening spectra and a cavitation spectrum were collected at all seven test points. The pre-opening test was performed as two independent replicate measurements taken with an interval of 1.5 hours. The intention of the pre-opening test is to verify that two measurements taken within a reasonable interval do not differ too much. In a PCA score-plot corresponding test-points should be located close together. If not, this would indicate an unstable experimental set-up which in turn would give reason to doubt the validity of the end results. Secondly, this pre-opening test could act as a baseline reference for the rest of the experiments performed on the pump. Preferably all changes of worn parts with respect to the new ones should be indicated in the different score-plots by the new points moving away from the points given by the pre-opening test.
The first test performed was labelled a, the second b, and the different test-point locations were labelled 1,...,7, respectively. An object labelled a6g1 was also included in the X-matrix. This object was sampled with a gain of 1, whereas all other objects were collected with a gain of 2.
When a new set of data is to be analysed, the raw data must be looked at first. In this way the analyst can decide whether to use auto-scaling or not, as well as get an impression of the data structure. Figure 10 shows the data matrix plot of the pre-opening data structure. The plot contains 15 objects (lines) and 256 frequency variables (columns). As can be seen from the figure, the values in the first half of the columns are significantly higher than in the rest. This difference indicates that scaling is necessary.
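Auto-scaling of such a data matrix, i.e. centering each frequency variable and scaling it to unit variance, can be sketched as follows (random stand-in data shaped like the pre-opening matrix):

```python
import numpy as np

# Auto-scaling sketch: centre each frequency variable (column) and scale it
# to unit variance, so that the high-magnitude first half of the spectrum
# does not dominate the decomposition. 15 objects by 256 frequency variables,
# as in the pre-opening data; the values themselves are synthetic.
rng = np.random.default_rng(3)
X = rng.normal(size=(15, 256))
X[:, :128] += 50.0        # first half of the columns markedly higher, as in Figure 10

X_scaled = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

print(X_scaled.mean(axis=0).max(), X_scaled.std(axis=0, ddof=1).min())
```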
With all data centred and scaled, and by employing leverage correction as a validation method, a PCA produced the result shown in Figure 11. As can be seen from the figure, the two corresponding replicates of the 7 test-points group nicely in pairs, indicating that two separate spectra taken at a 1.5 hour interval do not differ much at any test-point. There is, however, one exception in that object a6g1 is clearly anomalous. Object a6g1 should have been located in the vicinity of a6 and b6. The reason for the behaviour of this one particular object is that it was erroneously recorded with a gain of 1 while the rest of the objects were recorded with a gain of 2. This difference is also easily observed in the single vector plot shown in Figure 12, where the spectra of objects a6, b6 and a6g1 are plotted together. In the plot, object a6g1 is labelled 3, and it is easily verified that its curve, at all frequencies, lies below those of the two other objects, labelled 1 and 2.
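A minimal PCA sketch (via SVD, on synthetic stand-in spectra; the real analysis also scaled the data) of why a half-gain object separates from its replicates in the score plot:

```python
import numpy as np

# Sketch illustrating why an object recorded at gain 1 instead of gain 2
# (like a6g1) falls far from its replicates in a PCA score plot.
# Object names follow the text; the spectra are synthetic.
rng = np.random.default_rng(4)
base = rng.normal(loc=10.0, scale=1.0, size=(7, 256))
a = base + 0.1 * rng.normal(size=(7, 256))   # first replicate, points a1..a7
b = base + 0.1 * rng.normal(size=(7, 256))   # second replicate, points b1..b7
a6g1 = a[5] / 2.0                            # same spectrum at half gain

X = np.vstack([a, b, a6g1])
Xc = X - X.mean(axis=0)                      # centred (scaling omitted in this sketch)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                               # PCA scores (t1, t2, ...)

# Distances in the (t1, t2) plane: a6g1 vs a6, and the replicate pair b6 vs a6
d_anom = np.linalg.norm(scores[14, :2] - scores[5, :2])
d_pair = np.linalg.norm(scores[12, :2] - scores[5, :2])
print(f"a6g1 vs a6: {d_anom:.2f}, b6 vs a6: {d_pair:.2f}")
```

The half-gain object is displaced far more than the replicate spread, which is exactly the anomaly visible in Figure 11.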
A test involving induced damage to the impeller was conducted, the purpose of the test being to establish whether or not it is possible to discriminate acoustically between different induced damages, or between a worn impeller and a new impeller. Another goal was to check whether a PCA performed on the X matrix alone would be able to discriminate between the differences, if any, as well as a PLS1-discrim.
The X matrix consists of spectra from four principal situations, respectively having an old impeller, impeller damage 1, impeller damage 2, and finally a new impeller installed, all measurements being performed with a new wear-ring in the pump housing. In all four situations measurements were taken at the 7 test-points mentioned earlier. The X matrix then consists of 28 objects and 256 variables. The objects are labelled w1,...,w7, a1,...,a7, b1,...,b7 and n1,...,n7, respectively, indicating the worn impeller, damage 1, damage 2 and the new impeller.
The PLS1-discrim matrix contains one variable designed to quantify the damage development due to normal wear and the induced damage, as well as the effect of installing a new impeller. Such a quantification is not easy to establish. For instance, the effect of removing a piece of metal from the impeller will induce new effects due to unbalance, changed flow-patterns and turbulence in the impeller, amongst other effects. One possible way to achieve such discrimination is to use the different weights of the impeller masses; it is possible to express the masses in use in the different situations as "weight reduction" relative to the new impeller mass. Another, and preferred, quantification method is to give all situations involving the new impeller the value of 0, the naturally worn impeller the value of 1 and the two damages the values of 3 and 4, respectively. This semi-quantitative wear/damage index includes both severe wear and representative damages. The PLS1-discrim Y matrix then consists of 28 objects and 1 variable ranging from 0 to 4.
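The preferred Y-vector design can be sketched directly; the object names and index values follow the text:

```python
import numpy as np

# Sketch of the semi-quantitative wear/damage index used as the PLS1-discrim
# Y variable: new = 0, worn = 1, damage 1 = 3, damage 2 = 4, one value per
# object at each of the 7 test-points (28 objects in total).
index = {"n": 0, "w": 1, "a": 3, "b": 4}
objects = [f"{s}{p}" for s in ("w", "a", "b", "n") for p in range(1, 8)]
y = np.array([index[name[0]] for name in objects], dtype=float)

print(len(objects), y.min(), y.max())
```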
Figure 13 shows a data matrix plot of the X matrix used in the impeller wear and damage analysis. As can be seen from Figure 13, the standard X matrix procedure involving scaling and centering is necessary. Leverage correction is used as the validation method for this model, as the prediction performance is of no interest; it is the Y-guided decomposition of the X data that is of interest. The residual X-variance plot shown in Figure 14 indicates that 3 PCs would be optimal, as they would explain approximately 83% of the variance in X.
Figure 15 is a PCA score plot of PC1 and PC2 (t1t2) showing a partly successful discrimination between the four experimental set-ups: n = new, w = worn, a = damage 1, b = damage 2. In the plot a line connecting three points, 2, 3 and 6, corresponding to physical test-points situated directly on the pump housing, has been drawn for each of the experimental set-ups. Test-point 6 is located on the opposite side of the pump housing, in the pump's vertical centreline and between test-points 2 and 3. As can be seen from the figure, there is a discrimination between the worn impeller, the new impeller and the first imposed damage. On the other hand, the triangle representing the second damage overlaps both the new impeller triangle and the worn impeller triangle. The PCA thus showed that the X matrix alone does not contain enough structure to discriminate clearly between the four experimental set-ups; it can only be concluded that PCA alone cannot discriminate effectively between them.
Performing a PLS1 means that the Y matrix will directly guide the decomposition of the X-space. The effect of this interaction is clearly shown in Figure 16 as a much improved discrimination relative to the PCA score plot of Figure 15. In Figure 16, all four situations are completely separated. The goal of this analysis is to show that there are detectable acoustic differences that can discriminate between the four situations. The reason for this success is undoubtedly that in the latter model all knowledge is used by introducing the new Y-vector.
In this case, two PLS components explain 66% of the variance in the X-space and 75% of the variance in the Y-space. Furthermore, in this case, the PLS1-discrim clearly discriminates better than the PCA model, even though it uses less of the variance in X. These findings form the basis of the decision to base further analysis on the PLS1-discrim.
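How an "explained X-variance" percentage of this kind can be computed from a decomposition is sketched below on random stand-in data (shown here for a PCA-style SVD decomposition; the 66%/75% figures above come from the PLS model):

```python
import numpy as np

# Sketch: cumulative explained X-variance per component, computed as the
# variance captured by the first k components relative to the total
# variance of the centred matrix. Data are random stand-ins.
rng = np.random.default_rng(5)
X = rng.normal(size=(28, 50))
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

total = (Xc ** 2).sum()
explained = np.cumsum(s ** 2) / total      # cumulative explained X-variance
print(f"2 components explain {100 * explained[1]:.1f}% of X")
```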
Now, some interesting questions as to what can be deduced, are as follows:
- How does wear and damage appear in a score plot?
- Can some of the test points on the pump housing be singled out as significantly better than others?
- Are the effects of wear and damage detectable all over the pump? Even on the connecting pipes?
- Which variables, or which frequencies, contribute the most to the result?
The arrow in the PLS1-discrim t1t2 plot shown in Figure 16 represents the evolution of the acoustic spectra due to the four different situations examined. When a new impeller gradually is worn, there will be a movement mainly to the right in the plot. Introducing a damage by breaking off a part of the impeller makes the new spectra jump upwards in the plot. This sudden change in position indicates that a new situation has emerged, very unlike the situation due to normal wear. Removing yet another part of the impeller does not represent a new situation; it merely manifests itself as "another damage" moving to the right. From this plot one can tell that development in wear or damage will move the positions of the spectra to the right, along t1, whereas a change caused by a sudden damage will cause the spectra to move upwards in the plot, along t2. In short, wear appears as a movement to the right, and a damage appears as a movement upwards in the plot. Thus, the plot demonstrates very clearly that acoustics in wear and damage monitoring has arrived.
As to the question of judging some test points as being "better" than others, reference is made to Figure 17, in which arrows inserted in the t1t2 plot indicate the displacement due to wear and damage for different sensor locations. As the measuring points 2, 3, 4 and 6 are situated directly on the pump housing, they are the primary ones to be considered. Hence, in Figure 17 arrows are drawn, for instance from n2 to w2 and from n2 to a2, to indicate the displacement due to wear and damage as recorded in this particular position, i.e. position 2.
Assuming that the direction of t1 indicates normal wear and tear, and that the direction of t2 indicates a sudden damage, the following can be deduced by examining the plot and Table 1 below, which indicates the relative movement of the spectra co-ordinates in percent and mm:
- Normal wear: Sensor position 2 on the pump housing outlet gives the largest indication, namely 100%. Sensor positions 3 and 6 would give an indication which is approximately 18% less. Position 4 is to be avoided, if normal wear of the impeller is to be monitored, as this indication is only one quarter of the indication in position 2.
- A sudden damage: Sensor position 3 should be chosen, since the relative displacement in this measuring point is 58% and 11% greater than in measuring point 2, respectively, depending upon whether one or two principal components are taken into account. Position 4, 129%, seems to be almost as good as position 6 and better than position 2. Bearing in mind that the "sudden damage" was induced on the impeller inlet, close to measuring point 3, this result is no surprise.
- A developing damage: Removing the second piece of metal from the impeller may be regarded as a developing damage. Sensor position 6, 147%, indicates such a situation in the best way. Sensor position 3 will provide only one quarter of the indication of sensor position 2, and only 16% of the indication of position 6.
Table 1
Relative movement of spectra co-ordinates in mm

From -> To   Vert.(t1)/Hor.(t2)   % of pos. 2   Diag.(t1,t2)   % of pos. 2   Location of sensor on pump housing
n2 -> w2     35                   100 %         37             100 %         Outlet
n3 -> w3     29                    83 %         30              81 %         Inlet
n4 -> w4      8                    23 %          9              24 %         Bearing
n6 -> w6     28                    80 %         31              84 %         Intermediate
n2 -> a2     24                   100 %         46             100 %         Outlet
n3 -> a3     38                   158 %         51             111 %         Inlet
n4 -> a4     31                   129 %         35              76 %         Bearing
n6 -> a6     32                   133 %         38              83 %         Intermediate
a2 -> b2     38                   100 %         41             100 %         Outlet
a3 -> b3      9                    24 %         12              29 %         Inlet
a4 -> b4     41                   108 %         39              95 %         Bearing
a6 -> b6     56                   147 %         34              83 %         Intermediate
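Entries of this kind can in principle be reproduced from the score co-ordinates in the t1t2 plot; the co-ordinates below are hypothetical, chosen only to show the calculation:

```python
import numpy as np

# Sketch of how Table 1 entries are derived: each arrow in the t1t2 plot is
# a displacement between two score co-ordinates; its length (in plot mm, or
# any consistent unit) is expressed as a percentage of the displacement at
# reference position 2. The co-ordinates here are hypothetical.
scores = {
    "n2": (0.0, 0.0), "w2": (35.0, 3.0),
    "n3": (1.0, 0.0), "w3": (29.0, 5.0),
}

def displacement(frm, to):
    p, q = np.array(scores[frm]), np.array(scores[to])
    return float(np.linalg.norm(q - p))

ref = displacement("n2", "w2")          # reference arrow at position 2
for frm, to in [("n2", "w2"), ("n3", "w3")]:
    d = displacement(frm, to)
    print(f"{frm}->{to}: {d:.1f} mm, {100 * d / ref:.0f}% of pos. 2")
```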
A preliminary conclusion regarding sensor positioning on the pump housing seems to be:
- Sensor position 2 to be used when monitoring normal wear.
- Sensor position 3 to be used when looking for sudden damages.
- Sensor position 6 to be used when monitoring a developing damage.
- All three sensor positions can be used for wear and damage monitoring, but sensor position 2 is to be preferred if only one sensor is to be used for monitoring normal running conditions.
As to the question whether changes can be detected all over the pump, an answer can be found by performing the same analysis, but this time in connection with locations 1, 5 and 7, situated on the inlet and outlet pipes of the pump. The result is shown in Figure 18, which is a t1t2 plot from the PLS1-discrim, and where measurement points 1, 5 and 7 form triangles, each of which represents one wear or damage situation. The plot shows the same pattern as when measuring directly on the pump housing. The discrimination, however, is not as good as before. Especially the regions representing the new and the worn impeller are close together.
Figure 19 shows the positions of the different spectra for three different test-points on the connecting piping. Point 1 is located on the pressure side of the pump, between the non-return valve and the shut-off valve, and point 5 is located just outside the shut-off valve at the suction side of the pump, while point 7 is located closer to the pump, on the inside of the suction shut-off valve. From Figure 19 it can be seen that at all test-point locations outside the pump housing, changes in the acoustic signal due to wear and damage are detectable. Location 1, outside the discharge shut-off valve, is not particularly suitable for the detection of normal wear, but is among the best for the detection of a developing damage. At location 5, outside the suction shut-off valve, both wear and sudden damage are easily detected, while a developing damage is hard to detect. Location 7 stands out as the best of the three locations, since a sensor positioned at this location produces spectra containing enough information about wear, sudden damage and damage development alike.
This leads to a preliminary conclusion :
- The best locations for monitoring normal wear outside the pump housing are locations 5 and 7.
- All three locations can be used for sudden damage monitoring; however, locations 5 and 7 are better suited than location 1.
- Only locations 1 and 7 are suitable for monitoring damage development.
- Location 5 is the most adequate for normal wear monitoring, if only one sensor is to be used outside the pump housing.
To determine which variables or frequencies contribute the most, the loading weights plot from the PLS1-discrim, shown in Figure 20, is useful. Variables ranging from 35 (5.5 kHz) to 210 (32.8 kHz) represent the most important frequencies, except for component number two, which has a distinct maximum at 34 kHz. The figure shows positive loading weights for PC1, i.e. important variables, in a frequency range spanning from approximately 5.5 kHz to 32.8 kHz, and all frequencies have almost the same impact on PC1. Frequencies that are important to PC2 show a more complex pattern. The plot shows that frequencies spanning from DC to 6.25 kHz contribute almost exclusively positive loading weights to PC2. Then, there is a switch to negative loading weights in a frequency range spanning from 6.25 to 10.9 kHz. The maximum peak loading weight of PC2 is located at 34 kHz. Variables containing frequencies below 5.5 kHz appear to be more "noisy", their weights fluctuating a lot.
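The variable-index to frequency mapping implied by these numbers can be sketched as follows; the 156.25 Hz bin width (80 kS/s over 512 points) is an inference from the quoted index/frequency pairs, not stated explicitly in the text:

```python
# Sketch of the variable-index/frequency mapping implied by the text:
# 256 frequency variables with variable 35 at about 5.5 kHz and variable 210
# at about 32.8 kHz is consistent with a bin width of 80,000 / 512 = 156.25 Hz.
# (An assumption; the *.FFT records elsewhere in the text use 50 kS/s.)
BIN_HZ = 80_000 / 512          # 156.25 Hz per variable

def variable_to_khz(index: int) -> float:
    return index * BIN_HZ / 1000.0

for var in (35, 210):
    print(f"variable {var} -> {variable_to_khz(var):.1f} kHz")
```

The same mapping also matches the later statement that variables 50 to 180 correspond to 7.8 to 28 kHz.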
Since both PC1 and PC2 are needed for wear and damage monitoring, at least the variables found in the marked area of Figure 21 (X-variables 60 to 200) ought to be used. The contribution from lower frequencies may not be significant for the discrimination, but this has to be checked by removing such variables from the X matrix and then making a new model. If there are no significant differences between the two PLS1-discrim plots, such variables can safely be removed.
Figure 21 is a PCA loading plot showing that it is not the same variables that contribute to both the PCA and the PLS1-discrim. By comparing this plot to that in Figure 20, the PLS1-discrim loading weights plot, it is verified that almost the same variables contribute in the same way to principal component 1 in both cases. Principal component 2 does not show the same pattern, as it has a peak of great positive loading weights around 34 kHz in the PLS1-discrim. In the PCA loading plot, the tendency is the opposite for the same frequencies, in that loadings are reduced in this range. These and the other differences between each variable's contribution to the different loadings indicate which variables contribute the most to the better discrimination of the PLS1-discrim.
7.2 Cavitation Test

When unloading a gas carrier, it is important to know when, or if, the deep-well pump is starting to lose its fluid. If fluid loss occurs, the consequence could be that a significant amount of liquid gas is left in the tank because of the difficulties in connection with re-establishing a stable flow pattern through the pump. This situation is most likely to occur during the stripping process and, if occurring, it will represent a substantial loss to the ship owner, due to loss of cargo, and the time consuming vaporisation process involved in the removal of the gas. Usually fluid loss in a pump of this kind is encountered when the suction pressure is gradually lowered. At a specific pressure, the liquid will gradually start to vaporise as the suction pressure is lowered. Then the pump has entered the stage of cavitation. The end result of this process is that the pump, not designed for pumping vapour, to a large extent only "sees" vapour on its suction side, and fluid loss has occurred. The operator avoids this situation by gradually closing the pump's discharge valve. This action reduces the fluid flow through the pump, and thus ensures an adequate suction pressure. By throttling the discharge valve, a trained, able and cautious operator is able to avoid cavitation, stripping the tank totally. However, there is one major question to be asked: When to start throttling?
The purpose of a cavitation test is to check whether or not it is possible to predict the degree of cavitation in a cavitating pump. If possible, this would perhaps enable an operator to avoid cavitation by throttling the suction side of the pump. Bringing an operating pump into cavitation is easily done by gradually throttling the pump's suction valve. Throttling of the suction valve does not immediately introduce cavitation. Cavitation will not appear until the restriction in the suction pipe lowers the suction pressure so much that vaporisation occurs in the vicinity of the impeller. The acoustic emission due to early cavitation is not readily heard by the human ear, because it is drowned by the normal pump noise. However, a fully developed cavitation adds so much noise to the pump sound that it can easily be detected by the human ear. On the suction side, the main sea water pump is equipped with a lever operated wafer type butterfly valve. The lever can be set in ten different positions, ranging from fully open (0% cavitation) to totally closed (100% cavitation). Running a pump with the suction valve closed is not particularly interesting. Doing so would, in this case, cause the cooling water pressure to fall below the stand-by pump's starting pressure. In the present experiment, the pump suction valve could therefore not be closed more than 80% (the lever being set at position 8) without having a stand-by pump start. Each of the seven measuring locations was analysed separately, and location number 6 turned out to be the best. The following analysis is therefore focused on measuring location 6 only.
The data set consists of an X matrix of 256 variables containing spectra originating from measuring location 6 and a Y matrix consisting of the five lever positions (3, 5, 6, 7, 8, corresponding to 30%, 50%, 60%, 70% and 80% closed, respectively) giving the different degrees of cavitation. The five objects are named 3, 5, 6, 7, and 8, their names directly reflecting the degree of cavitation.
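The layout of this data set can be sketched as follows, with random numbers standing in for the location 6 spectra:

```python
import numpy as np

# Sketch of the cavitation data set layout: spectra from measuring location 6
# as X, and the lever positions (degree of suction-valve closure, expressed
# as percent cavitation) as the single Y variable. Spectra are stand-ins.
lever_positions = [3, 5, 6, 7, 8]           # 30%..80% closed
rng = np.random.default_rng(6)
X = rng.normal(size=(len(lever_positions), 256))
y = 10.0 * np.array(lever_positions)        # degree of cavitation in percent

print(X.shape, list(y))
```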
Cavitation was modelled using full cross-validation and the usual centred and scaled X matrix. From the validated residual Y-variance plot (not shown) and the accompanying validated explained X-variance plot (not shown), it can be found that one PLS component explains 58% of the variance in the Y-space, using only 7% of the X variance. The large difference between the two percentages does not represent a problem and can thus be quite adequate. The model does not need more of the X-space to model the phenomenon described by PLS component 1. This is also in accordance with the data analysis rule: one component describes one phenomenon, and, in this case, one phenomenon is to be described, namely cavitation. On the other hand, one may argue that a second phenomenon is also involved, namely the transportation of energy from the cavitation process through the fluid and the metal and into the sensor. Using this argument, two PLS components should be considered, and they will then describe 70% of the Y variance using 19% of the X variance. From the resulting t1u1 plot (not shown) it can be found that the different degrees of cavitation are modelled, since the points are placed in the correct sequence along the regression line. The plot also indicates that the points lie on a slightly S-curved line. This S-curve represents a problem, as it cannot be removed by pre-processing or transformations. The only way to remedy this problem is by collecting more samples, e.g. by sampling every 5% increase in the Y-space.
A Pred/Meas plot (not shown) demonstrates that the prediction is not satisfactory, as the distance along the regression line between points 5 and 6, and also between points 7 and 8, is small. No point lies on the regression line, and the perpendicular distance from each point to the line is quite large. However, the frequency range of variables 50 to 180 (7.8 - 28 kHz) is relatively "noiseless", so this frequency span is probably the one to investigate more thoroughly when looking for frequencies particularly suited for the description of cavitation. Thus, as a preliminary conclusion: It is possible to use acoustics to discriminate between levels of cavitation, but this model is not able to predict the different cavitation levels.
7.3 Testing Fuel Valves of the Ship Main Engine

The intention of testing the main engine is:
1. To test whether or not it is possible to acoustically characterise a working fuel valve.
2. To test if it is possible to use acoustics to predict the performance of a working fuel valve.
3. To collect acoustic spectra from all main bearings, the first spectra later being compared to new spectra made at the same locations and under the same running conditions. By comparing the two sets of spectra one is seeking to quantify any changes in bearing performance.
The engine tested is a four stroke engine (manufactured by Krupp MaK Maschinenbau GmbH, Germany, 1981), the shaft output of which is 3550 kW, and the rotational speed 375 rpm or 6.25 1/s. The engine is connected to a variable pitch propeller and a shaft generator. During all tests the engine ran idle on gas oil.
The fuel valve, a spring-loaded needle valve, is by many engineers considered to be one of the most important components in an engine. The performance of the valve governs to a large extent parameters vital to the combustion process. A defective valve, or a valve exhibiting bad performance, will always result in higher fuel consumption, due to non-optimal combustion, and may also cause severe damage to the piston and/or liner. A good fuel valve opens at a specific, predetermined pressure. It atomises and distributes the fuel well and evenly under varying load conditions and instantaneously closes totally at the end of the fuel injection period. If these criteria are not met, the valve is not performing well. Performance can be thought of as a function of drift in opening pressure, atomisation and ability to stay closed. Under normal operation, these parameters are very difficult to monitor, and if performance could be detected and predicted acoustically, a great achievement would have been made.
All fuel valve tests were performed on cylinder number eight, located at the non-driving end of the engine. The fuel valve (there is only one for each cylinder) is positioned in the centre of the cylinder cover. Due to the fact that the vessel was at anchor, all tests were performed with the engine running at idle speed and with "zero" load. In order to check whether or not changing the pump index would significantly change the acoustic spectrum, the pump index was also raised from 10 to 13 units in one experiment, giving a load increase in this cylinder. The accelerometer was mounted on the fuel valve by screwing a special stud into the accelerometer base. This stud was later glued onto the fuel valve using cyano-acrylate glue. The gluing operation had to be carried out after fitting the fuel valve onto the cylinder cover, thus introducing an error, as it was not always possible to reproduce exactly the same fixing point. However, the error introduced in this way is considered to be without significance for the end result. Prior to the collection of each spectrum, the engine was allowed to run idle for several minutes in order to achieve stable operating conditions. Exhaust temperature, pump index, fuel valve condition and lubricating oil pressure were recorded during each experiment.
Since the goal was not only to check for acoustic spectral changes due to the performance of different fuel valves, but ultimately to predict the performance of an operating fuel valve, a way of quantifying the performance had to be found. As mentioned earlier, performance is a complex function depending on several factors. In addition, influence from an environmental factor, due to great density, viscosity and temperature differences between the combustion space environment and the workshop test bench where the separate testing of the valves was performed, must be taken into consideration. To compensate for such obstacles, quantification of the performance had to be carried out in the following manner: Before fitting a new fuel valve in the engine, the valve's performance was examined by the Chief Engineer and the inventor, by testing it extensively in the test bench. The same procedure was performed when a valve was removed from the engine. Each fuel valve was given a grade on a scale ranging from 1 to 5, and the result recorded. Grade 1 was considered best and grade 5 was given to the valve with the worst performance. Later, a description of the behaviour and relevant data concerning the different fuel valves used in the test were given to an engine specialist for consideration and grading. The professional's evaluation and that made on site, onboard the vessel, proved to be quite consistent, and the resulting grade was used as a measure of fuel valve performance for the valves tested.
A total of eleven spectra were recorded during the test. First a spectrum was taken of the original valve already fitted in the engine. This spectrum was named a1. The fuel valve was then removed, inspected and graded before being refitted. The new spectrum was named a2. The performance of this original valve was given a grade of four, since its opening pressure was 250 bars instead of the correct one, 260 bars. In addition, the valve was "peeing", giving the fuel off as jets instead of atomising the fuel when the fuel pressure was gently raised towards the opening pressure. When the pressure was applied rapidly, it started to atomise even though the high pitched rattling noise of the moving needle could not be heard. The valve labelled o1 in the test was an "OK" valve taken "off the shelf" among spare and overhauled valves. It operated well at the specified opening pressure and was given grade one. After being tested in the engine, this valve was taken out and the opening pressure lowered from 260 bars to 160 bars. This lowering of the opening pressure would result in a really bad performance due to early introduction of the fuel into the combustion space. The low tension in the valve spring could also probably induce an uneven fuel cut-off at the end of the fuel injection period, causing after-burn and carbonisation. This fuel valve was given the grade five (i.e. the worst) and was labelled c in the test. The object labelled d in the test is the same valve as was used in the previous tests, labelled o1 and c, but now the opening pressure was raised to 360 bars.
A situation where the opening pressure of a working fuel valve increases during operation is physically not possible, but was taken into consideration anyhow, since a valve could accidentally be set with a too high opening pressure, and also because it would be interesting to see where such a valve would position itself in a scores plot. (A too high opening pressure is undesirable because it puts extra strain on the high pressure fuel pump. Even if it atomises well, fuel drops could reach the cylinder liner and/or the piston, causing extensive burning and erosion of the metal surface.) Its performance was rated three, the valve being labelled d. The fuel valve was then extracted, the opening pressure adjusted back to normal specifications, and testing and grading performed before refitting in the engine as replicate r1, given the rating one. Fuel valves are manufactured with an extremely high degree of precision, which implies that they are sensitive to impacts in their nozzle region. The nozzle can easily be damaged when putting the valve down on its nozzle. Unfortunately, in practice, this often happens because the crew does not think of, or even worse, does not know the implications. This situation was tested by giving the nozzle of the next test valve a gentle blow with a small hammer. This valve is labelled f in the data set and its performance was rated three.
The properly functioning fuel valve, o1, was tested with two different pump indexes, namely 10 and 13. These two objects are labelled e10 and e13, respectively. Finally a new overhauled fuel valve in good working condition, r1, was mounted in the engine. Two more replicates, labelled r2 and r3, were later taken from this fuel valve, such replicates, of course, also being graded one.
In this case, the X matrix consists of 11 objects and 256 variables from the acoustic spectra, and the Y matrix contains one variable describing the performance of the eleven fuel valves tested. The eleven objects and their respective performance grading used as the Y matrix, are presented in Table 2 below.
Table 2
Y: The Fuel Valve Test

Object name   Performance   Comment
a1            4             Original valve, opening at 250 bars, peeing, 750 hours
a2            4             Replicate of a1
c             5             Ok valve, but opening pressure set to 160 bars
d             3             Ok valve, but opening pressure set to 360 bars
e10           1             Ok valve, pump index set at 10
e13           1             Ok valve, pump index set at 13
f             3             Damaged nozzle, bad atomisation at low pressures, Ok at high pressures
o1            1             Ok valve
r1            1             Replicate test of another Ok valve
r2            1             Replicate of r1
r3            1             Replicate of r1

The X matrix was checked in the usual manner by looking at the data matrix plot. Then, centering and scaling were performed. The Y-vector is delineated in Table 2. A model (F_Va-0) was then made, initially with four PLS components and leverage correction. From the resulting residual Y-variance and explained X-variance plots (not shown) it was evident that no more than two PLS components had to be used in the modelling. These two components explain 95% of the variance in Y using only 30% of the variance in X. This indicates that the X matrix contains a lot of "noise" that cannot be correlated to Y. Knowing all the different components, processes and "sounds" produced in the vicinity of the fuel valve, this is easy to understand. Based on these findings the F_Va-1 model was made. This model is test-set-switch validated and modelled with two PLS components. The resulting t1u1 score plot is shown in Figure 22, showing clear groupings, with replicates grouped nicely together.
Clear groupings are observable in the plot, representing the three different conditions. No outliers are present, as the objects in the lower left and the object in the upper right corner both represent extremes. The objects in between are not located too far out from the regression line between these extremes. Objects a1, a2, d and f represent some intermediate value between fuel valves having good performance (e10, e13, o1 and r1, ..., r3) and the fuel valve exhibiting the worst performance, namely valve c. The t1t2 score plot from the PLS1 model, shown in Figure 23, demonstrates the same tendency as the t1u1 plot of Figure 22. Fuel valves having good performance are grouped together on the left side of t2, while the faulty ones are located to the right. This indicates that PLS component 1 mainly describes performance. An object moving to the right in the plot loses performance. Interpreting the phenomenon in the direction of PLS component 2 is not obvious, and is therefore left for later studies. Indeed this is "how the acoustics see the world" (isolated).
From Figure 23 it can be deduced that changing the fuel pump index from 10 (object e10) to 13 (object e13) does not influence the plot much. The two objects are grouped together in between the three replicates r1, r2 and r3. They appear as replicates of one another. Therefore, there is reason to believe that their different locations in the score plot are not caused by a change of the pump index, but are as much a result of the engine being stopped and started, valves being changed, etc. At this stage, a conclusion would be:
- It is possible to use acoustics in order to characterise the performance of a working fuel injection valve; indeed, it should be possible to set up a first order prediction PLS model.
- Degrading performance manifests itself as a movement to the right in the t1t2 score plot, see Figure 23.
- The effect of changing the pump index from 10 to 13 is not detectable, and changes within this range will therefore not interfere with performance.
In the previously used X and Y matrices, there are eleven objects, all shown in Table 2. If these matrices were used directly for prediction, some of the situations described by these eleven objects would influence the model more than the others, as some of them actually are only replicates of each other. Moreover, the model would have been made under the assumption that it contained 11 degrees of freedom, while it actually contains only 6 degrees of freedom. This would result in an over-optimistic prediction. Taking the average of the replicates representing the same situation makes each situation appear only once in the matrix to be modelled. In this way, all situations are given the same weight in the model.
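The replicate-averaging step can be written out explicitly. The group assignments follow Table 3; the tiny X matrix here (4 variables instead of 256) is purely illustrative:

```python
import numpy as np

# Object names and the averaged group each belongs to (per Table 3)
names  = ["a1", "a2", "c", "d", "e10", "e13", "f", "o1", "r1", "r2", "r3"]
groups = {"Orig": ["a1", "a2"], "Bad": ["c"], "High": ["d"],
          "In13": ["e13"], "Nozz": ["f"], "Ok": ["e10", "o1", "r1", "r2", "r3"]}

X = np.arange(11 * 4, dtype=float).reshape(11, 4)  # toy spectra, 4 variables for brevity
y = np.array([4, 4, 5, 3, 1, 1, 3, 1, 1, 1, 1], dtype=float)

idx = {n: i for i, n in enumerate(names)}
# Average the rows of all replicates belonging to the same situation
X_avg = np.array([X[[idx[n] for n in members]].mean(axis=0)
                  for members in groups.values()])
y_avg = np.array([y[[idx[n] for n in members]].mean()
                  for members in groups.values()])
print(X_avg.shape, y_avg)  # 6 averaged objects, one grade each
```

Each situation now contributes exactly one row, so no condition is over-weighted in the model.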
Hence, Table 3 below shows the resulting objects to be used for modelling after averaging. Objects a1 and a2 have been combined into a single object called Orig, and objects e10, o1, r1, r2 and r3 are combined into one object called Ok. The X matrix then consists of 6 objects and 256 variables, while the Y matrix contains a six-element performance vector. In the previous modelling of the fuel valve performance, two PLS components were used. PLS component 1 explained more than 80% of the variance in Y. Since the number of objects was almost halved in this new model, it is reasonable to say that no more than one component should be used in this analysis. Full cross-validation was used with matrices centered and scaled.
Table 3
The Fuel Valve Performance Prediction Test (Averaged replicates)

Old name              New name   Performance (Y)   Comment
a1, a2                Orig       4                 Original valve, opening at 250 bars, peeing
c                     Bad        5                 Ok valve, but opening pressure set to 160 bars
d                     High       3                 Ok valve, but opening pressure set to 360 bars
e13                   In13       1                 Ok valve, pump index set at 13
f                     Nozz       3                 Damaged nozzle, bad atomisation at low pressures, Ok at high pressures
e10, o1, r1, r2, r3   Ok         1                 Ok valve

In this case one PLS component explains 80% of the variance in the Y space of the PLS1 model, using only 36% of the variance in X, leaving the remainder as noise. A loading weights plot (not shown) for this new PLS1 model indicates that almost all frequencies contribute to PLS component one (describing performance). A significant shift appears at variable 82 (12.8 kHz): frequencies above this shift mainly contribute positive loading weights, while variables from 25 to 82 (3.9 - 12.8 kHz) carry negative loading weights. As before, it can be seen that frequencies below 3.9 kHz contribute two or three peaks giving both positive and negative loading weights. It is quite possible that by using only frequencies between 4 kHz and 33 kHz one could produce just as good a model.
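The variable numbers quoted above map linearly onto frequency: with 256 variables, the quoted pairs (variable 25 ≈ 3.9 kHz, 50 ≈ 7.8 kHz, 82 ≈ 12.8 kHz, 210 ≈ 32.8 kHz) are consistent with a span of DC to 40 kHz, i.e. about 156.25 Hz per variable. The 40 kHz span is inferred from those pairs, not stated explicitly in the text:

```python
BIN_HZ = 40_000 / 256  # about 156.25 Hz per spectral variable (span inferred, see lead-in)

def var_to_khz(k: int) -> float:
    """Approximate centre frequency, in kHz, of spectral variable k."""
    return k * BIN_HZ / 1000.0

for k in (25, 50, 82, 210):
    print(k, round(var_to_khz(k), 1))  # 25 -> 3.9, 50 -> 7.8, 82 -> 12.8, 210 -> 32.8
```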
Figure 24 shows predicted values plotted against measured values of performance. Even as few as six points give a relatively good prediction. If the prediction had been 100% successful, all points would have been placed on a straight line with a slope equal to one. As can be seen, this is not the case, but a slope of 0.8 and a correlation of 0.9 are quite satisfactory. This plot demonstrates that it is possible to predict the performance of a fuel valve by the use of acoustics alone (i.e. by using a suitably calibrated PLS model, of course).
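The slope and correlation quoted for Figure 24 are straightforward to compute from predicted/measured pairs. The six predicted values below are invented for illustration; they are not the values plotted in the original figure:

```python
import numpy as np

measured  = np.array([4.0, 5.0, 3.0, 1.0, 3.0, 1.0])  # grades from Table 3
predicted = np.array([3.6, 4.7, 3.1, 1.3, 2.6, 1.1])  # hypothetical model predictions

# Least-squares line of predicted vs measured, plus the correlation coefficient
slope, intercept = np.polyfit(measured, predicted, 1)
corr = np.corrcoef(measured, predicted)[0, 1]
print(round(slope, 2), round(corr, 2))  # 0.85 0.99
```

A perfect model would give slope 1 and correlation 1; values around 0.8-0.9 on only six points indicate a usable calibration.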
Hence, it is possible to predict the performance of a fuel injection valve simply by examining its acoustic spectrum, and it has been demonstrated that it is feasible to monitor fuel valve performance by acoustic chemometrics.
7.4 Testing the Main Bearing of the Ship Main Engine

As is well known in the shipping industry, ship owners suffer great losses due to main engine breakdowns. Such breakdowns often result in replacement of the bedplate and/or crank at an average cost, in each case, of about USD 1 million. The "clinical reason" for such breakdowns is typically "absence of oil film, which causes friction, and thereby overheating", most often caused by a sudden turning of the main bearing shell, thus effectively cutting off all lubricating oil to the bearing. But what happens before the turning of the bearing shell? Would it be possible to use acoustics to get some early warning of this defect? And, more fundamentally: what do spectra from nine main bearings look like in a score plot?
The purpose of the main engine bearing test is to:
- Collect spectra from the different main bearings, for comparison in a score plot.
- Collect spectra for later comparison with spectra originating from the same bearings half a year or one year later, for example, to facilitate inter-temporal monitoring.
The only access to the main bearings is through the crank case doors. These doors have to remain closed while the engine is running, both for safety reasons and to avoid leakage of lubricating oil. On the other hand, the accelerometer has to be physically in contact with the bearing, whereas the signal acquisition unit and the computer are located outside the crank case. This minor problem was overcome during signal acquisition by using a thin Teflon coated coaxial cable passing through a small passage made in the crank case door packing. The accelerometer used for this purpose was a Kistler accelerometer, type 8702B25, with a measuring range of ±25 g and a sensitivity of 201 mV/g, the resonant frequency of the accelerometer being 54.0 kHz.
In order to get the best possible signal, the accelerometer had to be placed as close to the bearing shell as possible. Both the accelerometer and its cabling had to be fastened in a secure and reliable way, so that it would not loosen during the test run. If it loosened, the accelerometer would be lost or, even worse, damage the engine itself. The following solution was chosen: a special stud was glued to a spare main bearing bolt nut, the accelerometer screwed onto the stud and secured with a nylon strip. Similar positions were used on all bearings in this test.
The data set consists of nine objects and the usual 256 frequency variables. The different objects are numbered from 1 to 9, the number directly reflecting the corresponding location of the bearing in the engine. Bearing number one is located in the driving end of the engine. No Y matrix was included in the data set.
The usual standardisation and centering procedure and leverage correction were used in a bearing model labelled bear_0. In the calibrated residual X-variance plot (not shown) three principal components explain 80% of the variance in X. In this "noisy" environment, using three components could mean over-fitting the model by modelling noise. Therefore, a new model, bear_1, was made with two components and leverage correction. In the model score plot (not shown), bearings number 3, 4, 6 and 8 are grouped together in a cluster, indicating that they are similar in some ways and positively correlated to bearings 7 and 5 along PC1. All these bearings are negatively correlated to bearings 1, 2 and 9. In particular, there is a clear difference between bearings 1 and 5. Bearing 1 is located closest to the gear, bearings 5 and 4 are considered to have the heaviest loads, and bearing 9 is close to the lubricating oil gear-pump and the vibration damper. In order to understand what phenomenon PC1 or PC2 describes, more information is needed. The information needed may be found in different maintenance reports or logs concerning this specific engine. For instance, it is possible that an answer can be found by studying the crankshaft deflection. Such information was looked for but not found; without it, more detailed interpretation of the phenomena along PC1 or PC2 will not be possible at this stage.
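The standardise-then-PCA procedure behind the bear_0 and bear_1 models can be sketched as follows; scikit-learn stands in for the software originally used, and the random matrix replaces the nine recorded bearing spectra:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(9, 256))  # 9 bearings x 256 frequency variables (placeholder)

Xs = StandardScaler().fit_transform(X)  # centre and scale each variable
pca = PCA(n_components=2).fit(Xs)       # two components, as in model bear_1

scores = pca.transform(Xs)              # PC1/PC2 coordinates for the score plot
explained = pca.explained_variance_ratio_.sum()
print(scores.shape)
```

Plotting the two columns of `scores` against each other gives the kind of bearing score plot discussed in the text.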
In the PCA loading plots (not shown) for the model, all frequencies above 3.2 kHz have positive loadings for PC1, but show a very large "dip" around 33 kHz. This is not the case for PC2, which has a clear shift from positive to negative loadings around variable 162, or a frequency of 23.3 kHz. Both plots show that loadings due to frequencies below variable 50 (7.8 kHz) appear to be of a more varying nature.
In order to test the impact on the PCA score plot, variables containing frequencies ranging from DC to 7.8 kHz (variables 0 to 50) and above 32.8 kHz (variables 210 to 256) were removed. A new model was made without these 96 variables, the new model being based on frequencies ranging from 7.8 to 32.8 kHz. During the modelling, leverage correction was used and the model labelled bear_3. When frequencies below 7.8 kHz and above 32.8 kHz were removed, more of the X-space variance could be modelled. Figure 25 shows the resulting score plot when frequencies below 7.8 kHz and above 32.8 kHz are removed.
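The variable removal described above amounts to a plain column selection, keeping variables 51 to 209 (roughly 7.8-32.8 kHz); the array below is again a placeholder for the real spectra:

```python
import numpy as np

X = np.random.default_rng(2).normal(size=(9, 256))  # placeholder bearing spectra

keep = np.arange(51, 210)  # variables 51..209, i.e. about 7.8-32.8 kHz
X_band = X[:, keep]
print(X_band.shape)        # 159 of the 256 variables are retained
```

The reduced matrix `X_band` would then be standardised and refitted exactly as before.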
The calibrated X-variance plot (not shown) shows a slight difference. By removing frequencies below 7.8 kHz and above 32.8 kHz, the explained variance rose from 73% to 80%. This indicates that the frequencies removed contain some "noise" not related directly to the description of the bearing phenomenon. The score plot did not change much. The plot shown in Figure 25 has been mirrored along PC1, but the relative positions of the different objects are kept almost constant. These findings show that PCA is robust and not easily tilted by "noisy" variables. It also demonstrates quite clearly that in this case the modelling can be performed with 37% of the original variables (frequencies) removed. In later "bearing surveys", frequencies between 7.8 kHz and 32.8 kHz are obviously the ones to consider when performing this kind of analysis. Hence, the above leads to the conclusion that:
- No detailed interpretation is possible due to lack of problem-related information.
- A frequency range spanning 7.8 to 32.8 kHz will produce an adequate PCA score plot.
- PCA is robust towards "noisy" variables.
8. CONCLUSION
The impeller wear and damage monitoring, and the cavitation test, led to the following findings:
- A PCA performed on the X-matrix alone was not able to discriminate between normal wear and an induced damage on the impeller.
- A PLS1-discrim performed on the X-matrix with a designed Y-matrix was able to clearly discriminate between the four experimental set-ups: a new impeller, a worn impeller, an induced damage and a secondary induced damage on the impeller.
- All three sensor positions can be used for wear and damage monitoring, but sensor position two is preferable, if only one sensor is to be used for monitoring normal running conditions.
- Wear and damage can be detected from measuring locations taken on the connecting pipelines and flanges at different distances from the pump housing.
- It is possible to use acoustics to discriminate between levels of cavitation.
- It was not possible to predict the different cavitation levels.
- Variables containing frequencies between 7.8 and 31.2 kHz will be suitable for monitoring wear, damage and cavitation.
On the engine, tests were performed with the intention to find whether or not it is possible to acoustically characterise a working fuel valve and if it is possible to use acoustics to predict the performance of the same valve. The tests led to the following findings:
- It is possible to characterise the performance of a working fuel injection valve.
- It is possible to predict the performance of a working fuel injection valve simply by examining its acoustic spectrum.
- Variables containing frequencies between 4 and 33 kHz will be suitable for monitoring a working fuel injection valve.
Summing up, the conclusion is that there are quantifiable, precise relations between the acoustic signature and the physical condition of a component.

Industrial Applicability
For the purpose of condition based maintenance and control, it is possible to envisage a "Chief-in-a-Box", i.e. an electronic unit incorporating the necessary interface for the reception of signals from various kinds of sensors. The signals may be processed, analysed and presented in real time by integrated processing and a user-friendly interface for the presentation of the results. Thereby, an engineer may add all the knowledge and capabilities included in the "Chief-in-a-Box" to his own knowledge and experience, and hence increase safety and save costs.
The method according to the present invention is, however, in no way restricted to marine applications only. Hydroelectric plants, for example, may be a future application area of equipment made in accordance with the invention, being capable of foreseeing, warning of and ultimately preventing disasters from taking place. Also, in a saw mill, the invention may be used to monitor the density of wood being cut, for example, to automatically adjust the cutting speed, or to monitor the saw blade wear to indicate when the blade needs to be mended or replaced.
The AMCM method should be of great interest to industry, since the method will give increased functionality and early warning, and will also satisfy the demand for simple, robust and cost-effective measuring equipment. AMCM may be used for the following purposes: on- or off-line prediction of condition and performance of internal components in machinery such as:
- Main or auxiliary engines:
1. Fuel valve performance 2. Bearings; scoring and increasing friction
3. Combustion process
4. Blow-by of combustion products
5. Mechanical wear, abrasion or erosion
6. Leakage in suction and exhaust valves
- Compressors:
7. Suction and pressure valves
8. Internal gas leaks
- Centrifugal pumps:
9. Impeller wear and damage
10. Damaged bearings
11. Wear followed by increasing leakage over the wear ring
12. Fluid loss on the suction side of the pump (cavitation)
- Leakage or obstruction in process equipment, e.g. valves or heat exchangers due to e.g. clogging, scaling or contamination

Claims

1. A method of detecting and processing acoustic signals emitted from reciprocating, oscillating or rotating objects to record and predict changes in the condition of the objects, the method comprising:
- detecting and recording different types of signals emitted from said objects and having varying amplitude, wavelength or frequency, and
- processing said recorded signals mathematically, characterised by the further step of treating the result of said mathematical processing by means of multivariate calibration so as to obtain information about the condition of said objects.
2. A method according to claim 1, characterised by carrying out said multivariate calibration to determine the condition of said objects in terms of continuous wear, exhaustion, torsion, development towards flaws due to fatigue or excessive load, abrasion, deposits/settlements, temperature changes, corrosion, crack formation, or tensions.
3. A method according to claim 1 or 2, characterised by carrying out said detection of signals emitted from said objects by means of acoustic sensors, such as accelerometers or acoustic emission sensors, the detected signals optionally being amplified and/or pre-processed in a signal processing unit, and possibly stored intermediately, prior to said mathematical processing and multivariate calibration.
4. A method according to any of the preceding claims, characterised by, in advance, producing a suitable multivariate model, preferably based on empirical data of the ideal condition of the objects, so as to allow the result of said multivariate calibration to be used to determine where the object is positioned between said ideal condition and a state of breakdown.
5. A method according to any of the preceding claims, characterised by, prior to said step of multivariate calibration, establishing an experience data base holding knowledge already gained related to the behaviour of said objects, the output of said multivariate calibration being connected to said data base to evaluate condition changes quantified by the multivariate calibration against data contained in said experience data base, for the purpose of indicating specific measures to be carried out on the basis of the data and condition changes recorded.
6. A method according to any of the preceding claims, characterised in that said mathematical processing involves carrying out a Fast Fourier Transform (FFT), or that an Angular Measuring Technique (AMT) is employed, to obtain resulting spectra for immediate or later processing by means of said multivariate calibration, in order to separate said conditions of the objects from one another.
7. A method according to any of claims 1 to 5, characterised in that said mathematical processing involves carrying out a joint time-frequency transformation, such as Gabor, Wavelets, or Wigner-Ville, which simultaneously takes account of both the frequency and the time domain, to produce a robust characterisation of the detected signals.
8. A method according to any of claims 1 to 5, characterised in that said multivariate calibration comprises the use of multi-way analysis involving a plurality of variables in three or more dimensions.
9. A method according to any of the preceding claims, characterised in that said detected and recorded signals comprise signal components within a frequency band ranging from DC to MHz, a specific frequency range being optionally selected, preferably prior to said step of mathematically processing such signals.
10. An apparatus adapted to carry out a method according to any of the preceding claims, the apparatus comprising:
- means to detect and record different types of signals emitted from said objects and having varying amplitude, wavelength or frequency, and
- means to process said recorded signals mathematically, characterised in that said apparatus comprises means to treat the result of said mathematical processing by means of multivariate calibration so as to obtain information about the condition of said objects.
11. An apparatus according to claim 10, characterised in that said apparatus comprises means, such as a computer screen and coloured lights, to display the result of said multivariate calibration in a readily comprehensible manner to man.
12. The use of a method according to any of claims 1 to 9, to determine a change in the acoustic image detected from a turbine and caused by a change in the position of the turbine rotor, for example, thereby enabling prediction of an approaching breakdown.
13. The use of a method according to any of claims 1 to 9, for continuous condition analysis, diagnostics and/or optimisation of operation of single objects or complete devices, such as engines, motors, generators and separators of various kinds.
14. The use of a method according to any of claims 1 to 9, to detect formation of scaling, deformations or reductions in mills, or to determine the rate of crushing or particle size in milling or grinding equipment.
15. The use of a method according to any of claims 1 to 9, to predict tension or stress in a ship's rudder suspension or hull structure.
16. The use of a method according to any of claims 1 to 9, to imitate an engine operator's use of his senses, to predict, in a reproducible manner, changes in conditions appearing gradually in single components and complex machinery.
PCT/NO1997/000069 1996-04-11 1997-03-10 Acoustic condition monitoring of objects WO1997038292A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU24130/97A AU2413097A (en) 1996-04-11 1997-03-10 Acoustic condition monitoring of objects

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
NO961446A NO961446D0 (en) 1996-04-11 1996-04-11 Method for recording and processing acoustic signals from reciprocating, oscillating and rotating objects for recording and prediction of state changes
NO961446 1996-04-11
NO971017 1996-10-10
NO971017A NO971017D0 (en) 1997-03-05 1997-03-05 Acoustic multivariate condition monitoring

Publications (1)

Publication Number Publication Date
WO1997038292A1 true WO1997038292A1 (en) 1997-10-16

Family

ID=26648658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NO1997/000069 WO1997038292A1 (en) 1996-04-11 1997-03-10 Acoustic condition monitoring of objects

Country Status (2)

Country Link
AU (1) AU2413097A (en)
WO (1) WO1997038292A1 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2109552A (en) * 1981-10-15 1983-06-02 Gsm Electrical Controls Ltd Fault detection in machinery
US4422333A (en) * 1982-04-29 1983-12-27 The Franklin Institute Method and apparatus for detecting and identifying excessively vibrating blades of a turbomachine
US4683542A (en) * 1983-07-15 1987-07-28 Mitsubishi Denki Kabushiki Kaisha Vibration monitoring apparatus
US4989159A (en) * 1988-10-13 1991-01-29 Liszka Ludwik Jan Machine monitoring method
EP0415401A2 (en) * 1989-09-01 1991-03-06 Edward W. Stark Improved multiplicative signal correction method and apparatus
US5109700A (en) * 1990-07-13 1992-05-05 Life Systems, Inc. Method and apparatus for analyzing rotating machines
US5383133A (en) * 1991-11-02 1995-01-17 Westland Helicopters Limited Integrated vibration reducing and health monitoring system for a helicopter


Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1041376A2 (en) * 1999-03-31 2000-10-04 Bayerische Motoren Werke Aktiengesellschaft Vibration acoustical diagnosis system for determining damages by vehicle parts
EP1041376A3 (en) * 1999-03-31 2002-01-09 Bayerische Motoren Werke Aktiengesellschaft Vibration acoustical diagnosis system for determining damages by vehicle parts
EP1111364A1 (en) * 1999-12-23 2001-06-27 Snecma Moteurs Damage detection of motor pieces
FR2803036A1 (en) * 1999-12-23 2001-06-29 Snecma DETECTION OF DAMAGE TO PARTS OF AN ENGINE
US7457785B1 (en) 2000-08-25 2008-11-25 Battelle Memorial Institute Method and apparatus to predict the remaining service life of an operating system
US6945094B2 (en) 2000-12-22 2005-09-20 Borealis Technology Oy Viscosity measurement
WO2002079646A1 (en) * 2001-03-28 2002-10-10 Aloys Wobben Method for monitoring a wind energy plant
US6966754B2 (en) 2001-03-28 2005-11-22 Aloys Wobben System and method for monitoring a wind turbine
GB2383635B (en) * 2001-10-31 2005-06-15 Tekgenuity Ltd Improvements in and relating to monitoring apparatus
GB2383635A (en) * 2001-10-31 2003-07-02 Tekgenuity Ltd Chromatic analysis of measured acoustic signals from a system
US6853951B2 (en) 2001-12-07 2005-02-08 Battelle Memorial Institute Methods and systems for analyzing the degradation and failure of mechanical systems
WO2003054503A3 (en) * 2001-12-07 2003-11-27 Battelle Memorial Institute Methods and systems for analyzing the degradation and failure of mechanical systems
WO2003054503A2 (en) * 2001-12-07 2003-07-03 Battelle Memorial Institute Methods and systems for analyzing the degradation and failure of mechanical systems
CN101719410A (en) * 2008-10-02 2010-06-02 罗伯特.博世有限公司 Method and control unit for operating an injection valve
WO2010136746A1 (en) * 2009-05-28 2010-12-02 Halliburton Energy Services, Inc. Real time pump monitoring
US8370046B2 (en) 2010-02-11 2013-02-05 General Electric Company System and method for monitoring a gas turbine
US9752949B2 (en) 2014-12-31 2017-09-05 General Electric Company System and method for locating engine noise
WO2017001090A1 (en) * 2015-07-02 2017-01-05 Robert Bosch Gmbh Method for checking the functional capability of a pump designed to convey a fluid
US10330022B2 (en) 2016-02-12 2019-06-25 General Electric Company Systems and methods for determining operational impact on turbine component creep life
CN110167713B (en) * 2016-12-28 2022-04-26 弗立兹·斯图特公司 Machine tool, in particular grinding machine, and method for detecting the actual state of a machine tool
WO2018122119A1 (en) * 2016-12-28 2018-07-05 Fritz Studer Ag Machine tool, in particular grinding machine, and method for determining an actual state of a machine tool
CN110167713A (en) * 2016-12-28 2019-08-23 弗立兹·斯图特公司 The method of lathe, particularly grinding machine and the virtual condition for obtaining lathe
US10760543B2 (en) 2017-07-12 2020-09-01 Innio Jenbacher Gmbh & Co Og System and method for valve event detection and control
US11788879B2 (en) * 2018-05-18 2023-10-17 Maschinenfabrik Reinhausen Gmbh State analysis of an inductive operating resource
GB2592814B (en) * 2018-10-15 2023-04-12 Phaedrus Llc Control system and method for detecting a position of a movable object
DE102019116340B4 (en) 2019-06-17 2021-10-21 A. Monforts Textilmaschinen Gmbh & Co. Kg Device for treating a web and method
DE102019211693A1 (en) * 2019-08-05 2021-02-11 Robert Bosch Gmbh Method and device for determining long-term damage to a component due to the application of vibrations
RU2745650C1 (en) * 2020-07-16 2021-03-30 Публичное акционерное общество "Транснефть" (ПАО "Транснефть") Bench for testing shaftless pump impeller elements

Also Published As

Publication number Publication date
AU2413097A (en) 1997-10-29

Similar Documents

Publication Publication Date Title
WO1997038292A1 (en) Acoustic condition monitoring of objects
Dinardo et al. A smart and intuitive machine condition monitoring in the Industry 4.0 scenario
RU2480806C2 (en) Gas turbine operation analysis method
EP1836576B1 (en) A precision diagnostic method for the failure protection and predictive maintenance of a vacuum pump and a precision diagnostic system therefor
US10352823B2 (en) Methods of analysing apparatus
US6801864B2 (en) System and method for analyzing vibration signals
US7698942B2 (en) Turbine engine stall warning system
US20190004014A1 (en) Apparatus, systems, and methods for determining nonlinear properties of a material to detect early fatigue or damage
US8474307B2 (en) Method for detecting resonance in a rotor shaft of a turbine engine
CN115539139A (en) Method for monitoring safety of steam turbine
Mustapha et al. Structural health monitoring of an annular component using a statistical approach
Koshekov et al. An intelligent system for vibrodiagnostics of oil and gas equipment
Giurgiutiu et al. Review of vibration-based helicopters health and usage monitoring methods
Bhende et al. Comprehensive bearing condition monitoring algorithm for incipient fault detection using acoustic emission
KR20070087139A (en) A precision diagnostic method for the failure protection and predictive maintenance of a vacuum pump and a precision diagnostic system therefor
JP6497919B2 (en) Diagnosis method and diagnosis system for equipment including rotating body and its bearing
Qu et al. Aging state detection of viscoelastic sandwich structure based on ELMD and sensitive IA spectrum entropy
RU2769990C1 (en) Method for vibration diagnostics of dc electric motors using the wavelet analysis method
Kriston et al. Application of vibro-acoustic methods in failure diagnostics
Valeev Method of Defect Identification of Industrial Equipment via Remote Strain Gauge Analysis
KR100456841B1 (en) Method for processing detection signal in duration test of vehicle
Kumar et al. A Review paper on Vibration-Based Fault Diagnosis of Rolling Element Bearings
KR20020035527A (en) The Method and Device of the Reliability Test and Decision
Voitov et al. Procedure for studying the natural frequencies of the valve mechanism of the internal combustion engine
RU2682561C1 (en) Method for determining technical condition of current collectors

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97536098

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA