Discussion:
cross-spectrum phase at high frequencies where signals are uncorrelated?
(too old to reply)
Angelo Campanella
2010-03-15 01:23:25 UTC
Permalink
I'm struggling with acoustical signal processing in that I want to
measure the phase shift caused by a device (microphone) at high frequencies.
My measurement capability goes beyond the normal cut-off frequency of that
device. I suspect that its output is minimal, virtually non-responsive (but
not quite) in the upper frequency range. My measurement method is to compare
that microphone's signal with that of a standard microphone whose phase shift
at high frequencies is already known. However, since two devices cannot be in
the same place at the same time, the device under test may be placed
"aligned" with the standard microphone, and the precise position of both
devices is difficult to adjust. Positional error is critical to phase
measurement. The wavelength of the high-frequency sound in question is about
4 millimeters (corresponding to 360 degrees of phase shift).
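The positional sensitivity is easy to put in numbers. A small sketch (Python; the 343 m/s speed of sound is an assumed value for roughly 20 degree C air):

```python
c = 343.0  # speed of sound in air, m/s (assumed value for ~20 deg C)

def deg_per_mm(f_hz):
    """Phase error in degrees per millimeter of mic misalignment at f_hz."""
    wavelength_m = c / f_hz
    return 360.0 * 1e-3 / wavelength_m

# A 4 mm wavelength corresponds to roughly 86 kHz; 100 kHz for comparison.
for f_hz in (85_750.0, 100_000.0):
    print(f"{f_hz/1e3:.0f} kHz: lambda = {c/f_hz*1e3:.2f} mm, "
          f"{deg_per_mm(f_hz):.0f} deg per mm of misalignment")
```

So a few tenths of a millimeter of placement error already costs tens of degrees at the top of the band.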

My test instrument is a 2-channel FFT analyzer (Larson Davis 3200)
with cross-spectrum capability; it computes for display the phase
difference between two signals across a wide frequency range, typically at
100 frequencies across that range. The test sound is acoustic "white noise",
received simultaneously by two different microphones.

I am of the opinion that if I can find a device position where
results are codified at the highest of frequencies, then that position is
proper.

Along the way, I blocked sound from entering the test microphone,
expecting a zero phase difference result on that account, since the only
signal remaining from the blocked microphone is the independent random noise
of its own amplifier circuit.

But lo and behold, the phase result from the analyzer, though a little
varied, was in the majority at plus 90 degrees (quadrature, or imaginary!). I
conclude that my knowledge of correlation mathematics is lacking.

So I ask you: what is the phase result when a purely random signal
is crossed with another independent signal having finite energy at all
frequencies?

A subsequent question is: what phase result is expected when the
high-frequency sound energy signal rises to become comparable to (but not
yet greater than) the purely random amplifier noise?

My ultimate use of this is to adjust the location of the test device
to always produce the phase result consistent with this codification, then
apply full sound level to accumulate credible test phase response data.

Anything you can offer along these lines would be greatly
appreciated.

Sincerely,

Angelo Campanella




--- news://freenews.netfront.net/ - complaints: ***@netfront.net ---
jerry
2010-03-15 16:05:54 UTC
Permalink
It sounds like you're getting a spurious result on your equipment
based on either its averaging settings (block length) or on the way it
treats phase unwrapping. You might try doing this analysis in the
digital domain using a .wav file or whatever raw data output you can
generate and processing the phase in e.g. Matlab or R. The phase
should be a zero-mean process with distinctly non-Gaussian statistics--
making it hard to develop an estimate from noisy data.
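The blocked-microphone experiment is easy to reproduce numerically. A sketch (Python/NumPy; the block length and averaging count are arbitrary, and this is not the LD3200's exact algorithm): for two fully independent noises, the block-averaged cross-spectrum shrinks toward zero, but its angle stays uniformly distributed over plus or minus 180 degrees, so a consistently +90 degree reading suggests some residual coupling rather than a property of the mathematics.

```python
import numpy as np

rng = np.random.default_rng(0)
nfft, nblocks = 1024, 200

# Two fully independent "channels": blocked-mic self-noise vs. reference
x = rng.standard_normal(nblocks * nfft)
y = rng.standard_normal(nblocks * nfft)

# Block-averaged cross-spectrum (rectangular window for simplicity)
Sxy = np.zeros(nfft // 2 + 1, dtype=complex)
for k in range(nblocks):
    Sxy += np.conj(np.fft.rfft(x[k*nfft:(k+1)*nfft])) * \
           np.fft.rfft(y[k*nfft:(k+1)*nfft])
Sxy /= nblocks

phase_deg = np.degrees(np.angle(Sxy))
print("mean phase:", phase_deg.mean())  # near 0, but only on average
print("phase std:", phase_deg.std())    # near 104 deg: uniform on +/-180
```

The standard deviation lands near 360/sqrt(12), about 104 degrees, which is what a uniformly random phase gives.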

J. Helffrich
answerman
2010-03-16 19:37:14 UTC
Permalink
Post by Angelo Campanella
snip......snip
It sounds like you are trying to measure the response of an unknown
microphone relative to that of a known reference microphone at high
frequencies using a dual-channel FFT analyzer. If that is the case, the
microphones don't need to be located at the same place, but the measurement
of the two microphone signals needs to be made simultaneously, and the sound
source, the reference microphone and the microphone under test need to be on
the same line and located several feet from any room boundary. How many
feet depends on the frequency range of interest. If you are only interested
in the frequency range above 1 kHz, place the reference microphone three
feet from the sound source and place the microphone under test 1.5 feet
behind the reference microphone. The excitation needs to be gated
broadband noise having a burst duration not much greater than 2 msec. In
the FFT analyzer, you must compensate for the time delay between the two
microphones so that the analyzed microphone signals are time aligned. You
also need to window (preferably Hanning) both microphone signals to
eliminate reflections so that the frequency response calculation is based
on direct sound only. If you've done everything correctly, the coherence
function should be unity over the entire frequency range of interest.
Once you have the calculated frequency response (magnitude & phase), you
can unwrap any residual linear-phase portion and subtract the phase
response of the reference microphone. This method is the time-domain
equivalent of a two-channel TDS measurement. I have been using this method
for nearly three decades and I know that it works and that it provides
extremely accurate results.
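The procedure above maps onto the standard two-channel estimate: frequency response H1 = (cross-spectrum)/(reference autospectrum), with the coherence function as the sanity check and the linear-phase slope giving the residual inter-channel delay. A minimal Python/SciPy sketch with a made-up pure-delay "device" (not the actual analyzer's processing; sample rate and delay are invented):

```python
import numpy as np
from scipy.signal import csd, welch, coherence

rng = np.random.default_rng(1)
fs = 200_000                              # sample rate, Hz (assumed)
x = rng.standard_normal(fs)               # reference mic: broadband noise

# Hypothetical device under test: gain 0.8 and a 5-sample delay
d = 5
y = 0.8 * np.concatenate([np.zeros(d), x[:-d]])

f, Sxy = csd(x, y, fs=fs, nperseg=1024)   # cross-spectrum (Hann window)
f, Sxx = welch(x, fs=fs, nperseg=1024)    # reference autospectrum
H = Sxy / Sxx                             # H1 frequency-response estimate
_, gamma2 = coherence(x, y, fs=fs, nperseg=1024)

phase = np.unwrap(np.angle(H))
# A pure delay of d samples gives a phase slope of -2*pi*d/fs rad per Hz
slope = np.polyfit(f[1:-1], phase[1:-1], 1)[0]
print("recovered delay (samples):", -slope * fs / (2 * np.pi))
print("worst coherence:", gamma2[1:-1].min())   # should sit near 1.0
```

Once the delay is known, shifting one channel by that amount (or subtracting the fitted line from the phase) leaves only the device's minimum-phase part.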
Angelo Campanella
2010-03-17 04:39:36 UTC
Permalink
Post by answerman
Post by Angelo Campanella
snip......snip
Dear Answerman:

Thanks for the detailed response. I'll try to go through the items one by
one.
Post by answerman
It sounds like you are trying to measure the response of an unknown
microphone relative to that of a known reference microphone at high
frequencies using a dual-channel FFT analyzer.
Exactly.
Post by answerman
If that is the case, the
microphones don't need to be located at the same place, but the measurement
of the two microphone signals needs to be made simultaneously and the sound
source,
That's the case.
Post by answerman
the reference microphone and the microphone under test need to be on
the same line and located several feet from any room boundary.
OK
Post by answerman
How many
feet depends on the frequency range of interest. If you are only interested
in the frequency range above 1 kHz, place the reference microphone three
feet from the sound source and place the microphone under test 1.5 feet
behind the reference microphone.
Why "exactly"? Mind you, this is not "audio". The wavelength at 100 kHz
is 3.44 mm. A millimeter error in placement amounts to about 120 degrees.
Post by answerman
The excitation needs to be gated
broadband noise having a burst duration not much greater than 2mSec.
To get a broadband ultrasound source that has both a smooth frequency
spectrum and a useful intensity, I use an air jet source where the sound is
emitted in a small region just downstream of a tiny air jet running at about
1/3 CFM. Millisecond pulsing is pretty much out of the question. The sound
source needs to be very broadband and reasonably intense. Electronic
devices are usually not broadband enough, though the ionophone might do it.
Post by answerman
In the FFT analyzer, you must compensate for the time delay between the
two
microphones so that the analyzed microphone signals are time aligned.
I suppose the positional correction precision is accomplished through
some sort of envelope matching. The fractional millimeter requirement
previously noted comes to mind as a challenge.
Post by answerman
You
also need to window (preferably Hanning) both microphone signals to
eliminate reflections so that the frequency response calculation is based
on direct sound only. If you've done everything correctly, the coherence
function should be unity over the entire frequency range of interest.
Once you have the calculated frequency response (magnitude & phase), you
can unwrap any residual linear-phase portion and subtract the phase
response of the reference microphone.
Can this be done with a Larson-Davis 3200 Analyzer?
Post by answerman
This method is the time-domain
equivalent of a two-channel TDS measurement. I have been using this method
for nearly three decades and I know that it works and that it provides
extremely accurate results.
What is the highest sound frequency to which you applied this method?

To give you an idea as to where I have "progressed", I copy my note to
others on this matter:


1- Now I study the preamplifier to know whether it is phase
inverting or not.

When the LD3200 is used to determine the WM-60A package output phase
as compared to the conventional LD2520 condenser microphone at low
frequency, I find that the output of the WM-60A and the LD2520 condenser
microphone with a non-inverting preamplifier LD910B are in phase.

I am told that a positive pressure sound pulse causes a conventional
condenser microphone to produce a negative signal, while the same positive
pressure sound pulse will produce a positive signal for an electret biased
with a small positive voltage. Is that the case for the WM-60A?

Or is the WM-60A packaged already with a preamp that inverts the
signal?

I found an ST Application Note #AN1534 which reports a TS 971
grounded-source amplifier circuit that inverts the signal. Is
this amplifier, or one similar to it, packaged inside the WM-60A?

If so, then my phase measurement results at audio frequencies are
understood and codified.....

2- At 02:38 PM -0500, 3/1/2010, Angelo Campanella writes further:

Since my last E-mail, I have re-checked the relative position of
the two microphones in the free-field. I determined that the relative
position is matched when the phase result vs frequency is unchanged when the
orientation of the microphone pair is rotated from pointing straight up
(grazing incidence) to pointing at the sound source (normal incidence).
I also made an "Injection voltage" check of the LD910B preamp
amplifier phase delay, if it exists. Here, the microphone is mounted with
its shield ungrounded (Teflon tape wrap in threads), then bridged to ground
with a low-ohm resistor into which test signal current is injected, driving
the ground, then obliging signal current to follow the same circuit as a
real sound signal. Comparison of the phase of this injection current to that
of the output signal will demonstrate any amplifier phase delay.
On doing this, I found +1.5 degrees at 1 kHz, nil at 7 kHz, -1
degree at 12 kHz, -3 degr. at 40 kHz, and -8 degr. at 100 kHz. On first
glance, one could expect similar phase shifts of the electret preamp as
well, as it is apparently intended for the same purpose.
The two microphones are now mounted side-by-side and seemingly
aligned on the same phase front from a distant source.

3- I then inspect the LD3200 Analyzer Cross-Spectrum phase result at
around 90 kHz where the WM-60A supposedly has no response. But there is a
residual there, perhaps 30 dB or more below the audio response that is
sufficient to make a phase response value.

In theory, when an uncorrelated white noise is spectrum-crossed with
the output of an active microphone in a white noise sound field (LD2520 in
this case) I think the result should be zero degrees relative phase. The
signal from the WM-60A electret in this case is only the broadband
electronic noise from its amplifier circuit.

4- To study this, I removed the LD2520 capsule, shorting the center
contact to ground with a cap. In this condition, the WM-60A is still
producing signal. The phase result is random plus and minus 180 degrees out
to 66 kHz. Beyond 66 kHz, the phase difference is somewhat constant at
approximately +90 degrees. This makes me believe that there is slight
capacitance cross-talk inside the LD3200 analyzer.

When the LD2520 capsule is replaced, both signals are strong enough
to effect a phase measurement over the entire 1-100kHz range. The resulting
value will slew strongly plus or minus many degrees when the position of
either microphone is advanced or retarded in the sound field.

5- A position of the microphone pair where a zero differential phase
shift result uniformly occurs above 85 kHz can be readily found by first
setting "by eye" the axial position of the face of the WM-60A to align with
the face of the adjacent bare LD2520 microphone. Then the final adjustment
is to slew the dowel carrying the microphone pair in azimuth a few degrees
until the 85 kHz to 100 kHz range of phase measurements all settle to be
near zero degrees.

In this position the phase shift over the 85 to 100 kHz range
presented by the WM-60A is assumed to be the same as that of the LD2520
microphone, namely about -140 to -180 degrees.

6- The question now is whether I have enough information. My position
is now:

a- It can be demonstrated that large (e.g. 100-300 degrees) differential
phase shifts do NOT appear at super-high frequencies when the two
microphones are positioned precisely on the same phase front of sound from a
distant source, and one microphone is non-responsive except for its
amplifier self-noise. I am still troubled by the apparent cross-talk between
channels.

b- When the electret microphone is in a broadband sound field, its output
still contains some acoustic signal at very high frequencies. But the phase
result is significantly affected (+ or - 200 to 400 degrees) when the
microphones are misaligned one way or the other, especially at
mid-frequencies, 20 to 60 kHz. The large phase shift values observed
correlate well with the microphone positional error at these frequencies.
The 90-100 kHz range is less affected.

c- Phase results at low frequencies (e.g. 1 kHz-20 kHz) are measured
relatively reliably by this "Comparison" method.

Please, some comments by you will be welcomed...

Sincerely,

7- Angelo Campanella.



answerman
2010-03-18 01:09:29 UTC
Permalink
"Angelo Campanella" <***@att.net> wrote in news:hnpmea$15bv$***@adenine.netfront.net:


snip......snip
Post by answerman
How many
feet depends on the frequency range of interest. If you are only interested
in the frequency range above 1kHz, place the reference microphone
three feet from the sound source and place the microphone under test
1.5 feet behind the reference microphone.
Why "exactly"? Mind you, this is not "audio". The wavelength at 100 kHz
is 3.44 mm. A millimeter error in placement amounts to about 120 degrees.
There are several reasons. One is to reduce the level of sound that is
reflected back to the reference mic by the mic under test. The reflected
sound causes ripples in the frequency response magnitude and phase, and
the amplitudes of those ripples are reduced as the separation distance
between the two mics is increased. Another is to maximize the frequency
resolution of the measurement. Frequency resolution (the reciprocal of the
measurement time window) is improved by increasing the analysis time
window that is free of reflections from nearby objects. In order to
obtain a frequency resolution of 500 Hz, you need a reflection-free time
window of roughly 2 msec (same as TDS). Last but not least, measurement
S/N is increased in direct proportion to the duration of the excitation
noise burst.
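The 2 msec / 500 Hz relation is just resolution = 1/(reflection-free window), and the effect of truncating before a reflection arrives can be seen on a toy impulse response (Python sketch; the sample rate and the 2.5 msec reflection time are made up for illustration):

```python
import numpy as np

fs = 500_000                     # sample rate, Hz (assumed)
T_free = 2e-3                    # reflection-free window, s
print("resolution:", 1.0 / T_free, "Hz")   # 500.0 Hz, as stated above

# Toy impulse response: direct sound plus one reflection at 2.5 msec
n = int(0.004 * fs)              # 4 msec record
h = np.zeros(n)
h[0] = 1.0                       # direct arrival
h[int(2.5e-3 * fs)] = 0.5        # reflection, outside the 2 msec window

H_full = np.abs(np.fft.rfft(h))                       # reflection included
H_win = np.abs(np.fft.rfft(h[:int(T_free * fs)], n))  # windowed: direct only

print("ripple with reflection:", H_full.max() - H_full.min())  # comb ripple
print("ripple after windowing:", H_win.max() - H_win.min())    # flat
```

Truncating the record before the reflection removes the comb ripple entirely, at the cost of limiting the resolution to 1/T of the kept window.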
Post by Angelo Campanella
Post by answerman
The excitation needs to be gated
broadband noise having a burst duration not much greater than 2mSec.
To get a broadband ultrasound source that has both a smooth frequency
spectrum and a useful intensity, I use an air jet source where the
sound is emitted in a small region just downstream of a tiny air jet
running at about 1/3 CFM. Millisecond pulsing is pretty much out of the
question. The sound source needs to be very broadband and reasonably
intense. Electronic devices are usually not broadband enough, though
the ionophone might do it.
For ultrasonic measurements in air, I use a Panasonic EAS10TH-1000 ribbon
tweeter which has a flat (+/-2dB) far-field response that extends from
below 5 kHz to over 100 kHz. Whether or not it has sufficient acoustic
output would depend on the sensitivities and frequency responses of the
mics that you are using.
Post by Angelo Campanella
Post by answerman
In the FFT analyzer, you must compensate for the time delay between
the two
microphones so that the analyzed microphone signals are time aligned.
I suppose the positional correction precision is accomplished through
some sort of envelope matching. The fractional millimeter requirement
previously noted comes to mind as a challenge.
The initial gross time alignment is done by envelope matching, but the
final measurement time-alignment is achieved by incrementing the delay
between the two channels in the FFT analyzer to eliminate the linear
accumulation of phase caused by a constant time delay. The final phase
response can be further tweaked by post processing to eliminate any
residual linear-phase component of the response.
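Removing the residual linear-phase component amounts to fitting the phase slope and backing out a delay. A Python illustration with invented numbers (12 microsecond residual delay, one-pole device rolloff at 20 kHz); the fitting band is chosen where the device's own phase is nearly flat, since any device phase slope slightly biases the delay estimate:

```python
import numpy as np

f = np.arange(1e3, 100e3, 1e3)     # analysis frequencies, Hz
tau = 12e-6                        # residual inter-channel delay, s (made up)
fc = 20e3                          # device corner frequency, Hz (made up)

device_phase = -np.arctan(f / fc)  # minimum-phase part of the response
measured = device_phase - 2 * np.pi * f * tau   # what the analyzer reports

# Fit the slope well above fc, where device_phase is nearly constant
band = f > 60e3
slope = np.polyfit(f[band], measured[band], 1)[0]
tau_est = -slope / (2 * np.pi)
print("estimated residual delay:", tau_est * 1e6, "us")  # close to 12

corrected = measured + 2 * np.pi * f * tau_est  # device phase, small residue
```

The small bias left over is exactly the "residual linear-phase portion" that gets tweaked out in post processing.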
Post by Angelo Campanella
Post by answerman
You
also need to window (preferably Hanning) both microphone signals to
eliminate reflections so that the frequency response calculation is
based on direct sound only. If you've done everything correctly, the
coherence function should be unity over the entire frequency range
of interest. Once you have the calculated frequency response
(magnitude & phase), you can unwrap any residual linear-phase portion
and subtract the phase response of the reference microphone.
Can this be done with a Larson-Davis 3200 Analyzer?
If you mean "entirely" with the 3200, I don't know. But, it can be done
with any FFT analyzer that is used together with external boxes to
generate the burst excitation and window the responses.
Post by Angelo Campanella
Post by answerman
This method is the time-domain
equivalent of a two-channel TDS measurement. I have been using this
method for nearly three decades and I know that it works and that it
provides extremely accurate results.
What is the highest sound frequency to which you applied this method?
100kHz.
Post by Angelo Campanella
To give you an idea as to where I have "progressed", I copy my note to
snip.....snip


The approach that I described requires a sufficient S/N ratio in both
channels in order to achieve a meaningful result. So, if the microphone
under test starts rolling off at 20 kHz and is down 30-40 dB at 100 kHz,
obtaining adequate S/N at 100 kHz is going to be very difficult if not
impossible. If that is the case, the only alternative of which I am
aware is a two-channel TDS setup.
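The question of what happens as the acoustic signal sinks toward the amplifier self-noise has a concrete statistical answer: the averaged cross-spectrum phase stays centered on the true value, but its scatter grows roughly like 1/(S/N * sqrt(number of averages)) and collapses toward a uniform plus-or-minus 180 degree spread when the signal disappears. A Monte-Carlo sketch (Python/NumPy; block size, average count and S/N values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
nfft, nblocks = 512, 400

def phase_spread_deg(snr):
    """Std (deg) of the averaged cross-spectrum phase when the test
    channel carries the shared signal at the given amplitude S/N."""
    Sxy = np.zeros(nfft // 2 + 1, dtype=complex)
    for _ in range(nblocks):
        s = rng.standard_normal(nfft)             # common acoustic signal
        a = s                                     # reference mic (clean)
        b = snr * s + rng.standard_normal(nfft)   # test mic + self-noise
        Sxy += np.conj(np.fft.rfft(a)) * np.fft.rfft(b)
    return np.degrees(np.angle(Sxy)).std()        # true phase is 0 here

for snr in (10.0, 1.0, 0.1, 0.01):
    print(f"S/N {snr}: phase spread {phase_spread_deg(snr):.1f} deg")
```

At high S/N the spread is a fraction of a degree; once the signal is buried, the spread approaches the ~104 degree value of a uniformly random phase, i.e. the reading becomes meaningless rather than settling at some particular angle.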
Angelo Campanella
2010-03-22 05:14:09 UTC
Permalink
Post by answerman
Post by Angelo Campanella
Post by answerman
How many
feet depends on the frequency range of interest. If you are only interested
in the frequency range above 1kHz, place the reference microphone
three feet from the sound source and place the microphone under test
1.5 feet behind the reference microphone.
Why "exactly"? Mind you, this is not "audio". The wavelength at 100 kHz
is 3.44 mm. A millimeter error in placement amounts to about 120 degrees.
There are several reasons. One is to reduce the level of sound that is
reflected back to the reference mic by the mic under test. The reflected
sound causes ripples in the frequency response magnitude and phase, and
the amplitudes of those ripples are reduced as the
between the two mics is increased.
I have seen some such ripples, but they are not the primary concern at this
point.
Post by answerman
Another is to maximize the frequency
resolution of the measurement. Frequency resolution (the reciprocal of the
measurement time window) is increased by increasing the analysis time
window that is free of reflections from nearby objects. In order to
obtain a frequency resolution of 500Hz, you need a reflection-free time
window of roughly 2mSec (Same as TDS). Last but not least, measurement
S/N is increased in direct proportion to the duration of the excitation
noise burst.
Not much chance of doing pulsed noise at this point.
Post by answerman
For ultrasonic measurements in air, I use a Panasonic EAS10TH-1000 ribbon
tweeter which has a flat (+/-2dB) far-field response that extends from
below 5KHz to over 100KHz. Whether or not it has sufficient acoustic
output would depend on the sensitivities and frequency responses of the
mics that you are using.
That's a possibility in the future.
Post by answerman
Post by Angelo Campanella
Post by answerman
In the FFT analyzer, you must compensate for the time delay between
the two
microphones so that the analyzed microphone signals are time aligned.
That "compensation" mechanization escapes me..
Post by answerman
The initial gross time alignment is done by envelope matching, but the
final measurement time-alignment is achieved by incrementing the delay
between the two channels in the FFT analyzer to eliminate the linear
accumulation of phase caused by a constant time delay. The final phase
response can be further tweaked by post processing to eliminate any
residual linear-phase component of the response.
If you mean "entirely" with the 3200, I don't know. But, it can be done
with any FFT analyzer that is used together with external boxes to
generate the burst excitation and window the responses.
The measurements contained in the LD3200 unit are:

Autospectrum,
Cross Spectrum
Transfer Function,
Coherence
Time.

Is there any significance of any of these regarding TDS?

Is it a matter of my dialing in a time value, then noting the spectral
result for each value?
Post by answerman
Post by Angelo Campanella
Post by answerman
This method is the time-domain
equivalent of a two-channel TDS measurement. I have been using this
method for nearly three decades and I know that it works and that it
provides extremely accurate results.
What is the highest sound frequency to which you applied this method?
100kHz.
OK... that's adequate for now.
Post by answerman
The appropach that I described requires a sufficient S/N ratio in both
channels in order to achieve a meaningful result. So, if the microphone
under test starts rolling off at 20KHz and is down 30-40dB at 100KHz,
obtaining adequate S/N at 100KHz is going to be very difficult if not
impossible. If that is the case, the only alternative of which I am
aware is a two-channel TDS setup.
It looks like I'm in a "Catch-22". But I want to solve the riddle
regardless.

Dwelling on your "90 degrees at resonance" benchmark, it may follow that,
while signals are good from both mics, one would expect each diaphragm
to experience a 180-degree phase shift at the high end of its useful
range, before higher modes of vibration become significant. Additional
lagging may occur up to a limit, but not exceeding 180 degrees. At even
higher frequencies, the reduced gain will cause the second signal, that from
the test electret in this case, to fade toward nil, and the formulation of
the cross-correlation algorithm becomes such that the computed phase will
wither; for lack of a better notion, the phase measurement result simply
fades toward nil as well.

If this is indeed the case, then the phase lag of the test microphone
does reach 180 degrees above the resonance. At still higher frequencies,
a small positional error will cause the phase result to balloon to large
plus or minus computed values that are not microphone phase shifts, but are
now confidently known to be due directly to misalignment.

A closed solution occurs when we fully accept that the phase lag cannot
exceed 180 degrees, and that, given the diminished sensitivity at the highest
frequencies, well beyond the test mic's useful range, the indicated phase
shift must be relatively small to nil.

Do I have that right?

Ange.



Answerman
2010-03-22 20:20:13 UTC
Permalink
Post by Angelo Campanella
Post by answerman
Post by Angelo Campanella
Post by answerman
How many
feet depens on the frequency range of interest. If you are only interested
in the frequency range above 1kHz, place the reference microphone
three feet from the sound source and place the microphone under
test 1.5 feet behind the reference microphone.
Why "exactly"? mind you, this is not "audio". The wavelength at 100 kHz
is 3.44mm. A millimeter error in placement amounts to 1bout 120 degrees.
There are several reasons. One is to reduce the level of sound that
is reflected back to the reference mic by the mic under test. The
reflected sound causes ripples in the frequency response magnitude
and phase, and the amplitudes of those ripples are reduced as the
separation distance between the two mics is increased.
I have seen some such ripples, but they are not the primary concern at
this point.
Post by answerman
Another is to maximize the frequency
resolution of the measurement. Frequency resolution (reciprocal of the
measurement time window) is improved by increasing the analysis time
window that is free of reflections from nearby objects. In order to
obtain a frequency resolution of 500Hz, you need a reflection-free
time window of roughly 2 msec (same as TDS). Last but not least,
measurement S/N is increased in direct proportion to the duration of
the excitation noise burst.
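
The resolution/time-window relation quoted above is just a reciprocal; a
one-line check (the 2 msec / 500 Hz pair comes from the post itself):

```python
# Frequency resolution achievable from a reflection-free time window:
# resolution (Hz) = 1 / window length (s).
window_s = 2e-3                   # reflection-free analysis window
resolution_hz = 1.0 / window_s    # achievable frequency resolution
print(resolution_hz)              # 500.0
```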
Not much chance of doing pulsed noise at this point.
Post by answerman
For ultrasonic measurements in air, I use a Panasonic EAS10TH-1000
ribbon tweeter which has a flat (+/-2dB) far-field response that
extends from below 5KHz to over 100KHz. Whether or not it has
sufficient acoustic output would depend on the sensitivities and
frequency responses of the mics that you are using.
That's a possibility in the future.
Post by answerman
Post by Angelo Campanella
Post by answerman
In the FFT analyzer, you must compensate for the time delay between
the two
microphones so that the analyzed microphone signals are time aligned.
That "compensation" mechanization escapes me.
I am only familiar with three FFT analyzers....HP3562A, B&K2032 and
B&K2035. All three have the capability of shifting (delaying) the
digitized time record in channel B (mic under test) relative to the
digitized time record in channel A (reference mic). If the time records
are not shifted to compensate for this delay, the calculated phase of the
frequency response function will contain a large linear phase component
which at high frequencies will overwhelm the minimum phase component that
you are trying to measure. Also, if the time records are not shifted to
compensate for this delay, the coherence function will be low, which
indicates that frequency response calculation is in error and invalid.
This is so whether or not the excitation is transient/burst or continuous.
Post by Angelo Campanella
Post by answerman
The initial gross time alignment is done by envelope matching, but
the final measurement time-alignment is achieved by incrementing the
delay between the two channels in the FFT analyzer to eliminate the
linear accumulation of phase caused by a constant time delay. The
final phase response can be further tweaked by post processing to
eliminate any residual linear-phase component of the response.
If you mean "entirely" with the 3200, I don't know. But it can be
done with any FFT analyzer that is used together with external boxes
to generate the burst excitation and window the responses.
Autospectrum,
Cross Spectrum
Transfer Function,
Coherence
Time.
Is there any significance of any of these regarding TDS?
No.
Post by Angelo Campanella
Is it a matter of my dialing in a time value, then noting the spectral
result for each value?
No. The first step is to guess the time delay based on the physical
separation of the mics, dial it in, and make a new measurement. Then
increment/decrement the delay to obtain a phase response that is free of
gross accumulation of a linear phase shift. When the excitation is gated
and the response is properly windowed, the coherence function will be
unity. If continuous excitation is used and there are strong reflections,
the coherence function will be low and the frequency response calculation
will be in error. In order to get high coherence, the calculation must be
based on direct sound only.
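
The linear-phase accumulation described above can be sketched numerically:
an uncompensated inter-mic delay tau contributes -360*f*tau degrees of
phase, and fitting and subtracting that slope is one way to estimate and
remove it (a sketch with an assumed 1 mm path difference, not the
analyzer's actual procedure):

```python
import numpy as np

# An uncompensated inter-microphone delay tau adds a linear phase of
# -360*f*tau degrees to the measured frequency response, which at
# ultrasonic frequencies dwarfs the microphone's own phase shift.
c = 344.0                  # speed of sound, m/s (approx.)
tau = 1.0e-3 / c           # ~2.9 us delay from a 1 mm path difference
f = np.linspace(1e3, 100e3, 100)   # analysis frequencies, Hz

linear_phase_deg = -360.0 * f * tau
print(f"delay-induced phase at 100 kHz: {linear_phase_deg[-1]:.0f} deg")

# One way to compensate: fit the slope of the (unwrapped) phase, convert
# it back to a delay estimate, and subtract the linear term; what remains
# is the device's own phase response.
slope = np.polyfit(f, linear_phase_deg, 1)[0]   # deg per Hz
tau_est = -slope / 360.0
residual_deg = linear_phase_deg + 360.0 * f * tau_est
print(f"estimated delay: {tau_est * 1e6:.2f} us")
```

In practice the measured phase also contains the microphone's minimum-phase
response and noise, so the slope fit is applied to the unwrapped measured
phase rather than to an ideal line as here.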
Post by Angelo Campanella
Post by answerman
Post by Angelo Campanella
Post by answerman
This method is the time-domain
equivalent of a two-channel TDS measurement. I have been using this
method for nearly three decades and I know that it works and that
it provides extremely accurate results.
What is the highest sound frequency to which you applied this method?
100kHz.
OK... that's adequate for now.
Post by answerman
The approach that I described requires a sufficient S/N ratio in
both channels in order to achieve a meaningful result. So, if the
microphone under test starts rolling off at 20KHz and is down 30-40dB
at 100KHz, obtaining adequate S/N at 100KHz is going to be very
difficult if not impossible. If that is the case, the only
alternative of which I am aware is a two-channel TDS setup.
It looks like I'm in a "Catch-22". But I want to solve the riddle
regardless.
In order to make an assessment I would need to know what you are using as a
reference mic, what mic you are trying to calibrate, and the specific
geometry of the measurement setup that you are using.
Post by Angelo Campanella
Dwelling on your "90 degrees at resonance" benchmark, it may follow that
while signals are good from both mics, each diaphragm will experience a
180-degree phase shift at the high end of its useful range, before
higher modes of vibration become significant. Additional lag may
accumulate up to a limit, but not exceeding 180 degrees. At even higher
frequencies, the reduced gain will cause the second signal, that from
the test electret in this case, to fade toward nil, and the formulation
of the cross-correlation algorithm becomes such that the computed phase
will wither; for lack of a better notion, the phase measurement result
simply fades toward nil as well.
If this is indeed the case, then the phase lag of the test microphone
should reach 180 degrees just above resonance. At still higher
frequencies, a small positional error will cause the phase result to
balloon to large plus or minus computed values that are not microphone
phase shifts, but can now confidently be attributed directly to
misalignment.
A closed solution occurs when we fully accept that the phase lag cannot
exceed 180 degrees, and that with diminished sensitivity at the highest
frequencies, well beyond the test mic's range of sensitivity, the
indicated phase shift must be relatively small, fading to nil.
Do I have that right?
The actual situation at 100kHz is likely far more complicated than you are
assuming, and I'd rather not engage in speculation based on assumptions.
The first step is to get a valid/accurate 2-channel measurement with a
coherence of 1.0, or close to it.
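
As a numerical illustration of coherence as that validity check (a
Welch-style estimate with assumed parameters, not the 3200's
implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(x, y, nfft=256):
    """Welch-style magnitude-squared coherence (Hann window, no overlap)."""
    n = len(x) // nfft
    w = np.hanning(nfft)
    X = np.fft.rfft(w * x[: n * nfft].reshape(n, nfft), axis=1)
    Y = np.fft.rfft(w * y[: n * nfft].reshape(n, nfft), axis=1)
    Gxy = np.mean(np.conj(X) * Y, axis=0)
    Gxx = np.mean(np.abs(X) ** 2, axis=0)
    Gyy = np.mean(np.abs(Y) ** 2, axis=0)
    return np.abs(Gxy) ** 2 / (Gxx * Gyy)

x = rng.standard_normal(256 * 400)
# Channel B = channel A plus a little independent noise: a valid,
# linearly related measurement, so coherence sits close to 1.
y_good = x + 0.1 * rng.standard_normal(x.size)
# Channel B unrelated to channel A: coherence collapses toward 0.
y_bad = rng.standard_normal(x.size)

print(f"good measurement: {coherence(x, y_good).mean():.3f}")  # close to 1
print(f"bad measurement:  {coherence(x, y_bad).mean():.3f}")   # near 0
```

Low coherence, as in the second case, flags that the corresponding phase
calculation is invalid.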


Answerman
2010-04-16 19:45:48 UTC
Permalink
"Angelo Campanella" <***@att.net> wrote in news:hnpmea$15bv$***@adenine.netfront.net:

snip.....snip
Post by Angelo Campanella
To get a broadband ultrasound source that has both a smooth frequency
spectrum and a useful intensity, I use an air jet source where the
sound is emitted in a small region just downstream of a tiny air jet
running about 1/3 CFM. Millisecond pulsing is pretty much out of the
question. The sound source needs to be very broadband and reasonably
intense. Electronic devices are usually not broadband enough, though
the ionophone might do it.
snip....snip


Do you have any measurements of the spectral content of your air-jet source
out to 200kHz? If not, what would you expect theoretically? The upper
frequency limit of my leaf tweeter is 110kHz, after which it rolls off
pretty rapidly. I am looking for an alternative sound source that is
capable of producing a reasonably flat spectrum (e.g. +/-5dB) over the
frequency range from 5kHz to 200kHz. At least temporarily I can live with a
continuous non-pulsed excitation.
