
Sound Engineering Principles, Module SED1001

Brief: Explain briefly the physiological process of hearing in humans. Which factors affect our hearing in terms of perception of loudness and pitch, and what part does psychoacoustics play in our perception of sound? Then give an overview of the main historical developments in recording sound. Which development do you think is the most significant, and why?

Student no.: 1104830
Marking tutor: Stuart Avery
Date of submission: 23/11/2011
Word count: 1629

The hearing process, fundamental to every human being, can be summarised briefly as follows:

Outer ear
As a sound reaches the ear, it is directed into the ear canal by the pinna, a funnel-like structure. The ear canal boosts sound pressure within the range of frequencies corresponding to the human voice; the tympanic membrane, or eardrum, the boundary between outer and middle ear located at the end of the canal, then reacts to the sound and begins to vibrate sympathetically.
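The pressure boost the canal provides can be estimated with simple acoustics: a tube closed at one end (by the eardrum) resonates near a quarter wavelength. A minimal sketch follows, assuming a typical adult canal length of about 2.5 cm; the figures are illustrative, not taken from the sources cited here.

```python
# Minimal sketch: the ear canal behaves roughly like a tube closed at one
# end (the eardrum), so its first resonance sits near c / (4 * L).
# The canal length is an assumed typical value, not a measurement.

SPEED_OF_SOUND = 343.0   # m/s, in air at ~20 degrees C
CANAL_LENGTH = 0.025     # m, assumed average adult ear canal length

resonance_hz = SPEED_OF_SOUND / (4 * CANAL_LENGTH)
print(f"Approximate ear canal resonance: {resonance_hz:.0f} Hz")
# ~3430 Hz, which is why the ear is most sensitive in the 2-5 kHz region
# covering much of the speech band.
```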


Middle ear
The tympanic membrane's vibrations set in motion the middle ear's mechanism, which comprises three ossicles (the malleus, incus and stapes) and two small muscles (the tensor tympani and stapedius). The tensor tympani and stapedius normally allow the ossicles to move freely, but they can tighten up and inhibit that motion when a sound gets too loud, in order to prevent damage. (1) The role of the middle ear's mechanism is to transmit the tympanic membrane's vibrations to the fluid filling the inner ear.
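A rough sketch of why this mechanism matters: the eardrum's area is much larger than the oval window's, and the ossicles add a small lever advantage, so the middle ear acts as an impedance-matching transformer between air and cochlear fluid. The figures below are commonly quoted textbook approximations, not values from the sources cited in this essay.

```python
import math

# Rough textbook estimate of the middle ear's pressure gain. All figures
# are commonly quoted approximations, assumed here for illustration.

EARDRUM_AREA_MM2 = 55.0      # effective vibrating area of tympanic membrane
OVAL_WINDOW_AREA_MM2 = 3.2   # area of the stapes footplate / oval window
OSSICLE_LEVER_RATIO = 1.3    # malleus-incus lever advantage

pressure_gain = (EARDRUM_AREA_MM2 / OVAL_WINDOW_AREA_MM2) * OSSICLE_LEVER_RATIO
gain_db = 20 * math.log10(pressure_gain)
print(f"Pressure gain: ~{pressure_gain:.0f}x (~{gain_db:.0f} dB)")
# ~22x, ~27 dB: this impedance matching lets airborne sound drive the much
# denser fluid of the inner ear efficiently.
```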

Inner ear
As the fluid is set in motion, a real-time spectral decomposition of the acoustic signal is carried out by the cochlea, a 'hydromechanical frequency analyzer' (2), which provides a spatial frequency map of the sound to the auditory nerve linked to the brain. The cochlea has a coiled shape and is divided into three ducts: the vestibular canal, the tympanic canal and the cochlear duct. The floor of the cochlear duct is formed by the basilar membrane, on which lies the organ of Corti, consisting of two rows of rod cells arranged on the membrane to form a minute arch, to which four rows of hair cells are fixed; finally, a minute fibre of the cochlear nerve is attached to each cell. (3) This is how the brain analyzes the sound: as the fluid moves, the hair cells react to different frequencies, as each is 'tuned' to a specific one.
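This place-to-frequency mapping can be made concrete with the Greenwood function, a standard empirical fit for the human cochlea (not one of the sources cited here); the sketch below shows the characteristic frequency at a few positions along the basilar membrane.

```python
# Sketch of the cochlea's place-frequency ("tonotopic") map using the
# Greenwood function, a standard empirical fit for the human cochlea.
# x is the fractional distance along the basilar membrane from the apex.

def greenwood_hz(x: float) -> float:
    """Characteristic frequency at fractional position x (0 = apex, 1 = base)."""
    A, a, k = 165.4, 2.1, 0.88   # published human-cochlea constants
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_hz(x):8.0f} Hz")
# The base of the cochlea responds to high frequencies (~20 kHz) and the
# apex to low ones (~20 Hz) - the spatial frequency map described above.
```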

Perception of loudness and pitch
The frequency response of the ear is not flat: as the Fletcher-Munson equal-loudness curves show, the ear's perception of loudness varies with the frequency of the sound perceived. Referring to these curves, it is clear that high- and low-frequency sounds have to be at a higher sound pressure level in order to be perceived as equally loud as a mid-frequency sound.
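As a hedged numerical illustration of the same idea, the standard A-weighting curve (IEC 61672) is a rough inverse of an equal-loudness contour; it is a related standard rather than the Fletcher-Munson data itself, but it shows the same penalty at frequency extremes.

```python
import math

# A-weighting: a standardised rough inverse of an equal-loudness contour.
# Negative values mean a tone at that frequency must be stronger than a
# 1 kHz tone to sound comparably loud at moderate levels.

def a_weighting_db(f: float) -> float:
    """IEC 61672 A-weighting in dB, relative to 0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (50, 100, 500, 1000, 4000, 10000):
    print(f"{f:6d} Hz -> {a_weighting_db(f):+6.1f} dB")
# 50 Hz comes out near -30 dB: a 50 Hz tone must be tens of dB stronger
# than a 1 kHz tone to be perceived as equally loud.
```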

Frequency range
It has been shown that the bandwidth of a sound affects its perceived loudness: generally, if two sounds with the same sound pressure level play together, the one with the wider frequency range will sound louder. This is because the ear has a set of filters called critical bandwidths, which vary with frequency and are roughly one-third of an octave wide. Therefore, if two sounds with the same sound pressure level both fit within a critical bandwidth, one will sound as loud as the other.

If, on the contrary, another sound with the same sound pressure level has a bandwidth that crosses the critical bandwidth in some way, it will sound louder than the ones within it. The phenomenon of masking is closely related to this: it occurs whenever two sounds that are close in frequency but have different sound pressure levels play together, and the louder one masks the quieter. (4)
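The "same critical band?" test can be sketched numerically. The snippet below uses the Glasberg & Moore ERB formula as the bandwidth estimate; this is one common model rather than the exact filter bank the text's source describes, but it behaves similarly (roughly one-third octave at mid frequencies).

```python
# Hedged sketch of the critical-band test, using the Glasberg & Moore ERB
# formula as the critical-bandwidth estimate.

def erb_hz(f: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at f."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def same_critical_band(f1: float, f2: float) -> bool:
    """True if both tones fit within one auditory filter centred between them."""
    centre = (f1 + f2) / 2.0
    return abs(f1 - f2) <= erb_hz(centre)

print(same_critical_band(1000, 1050))  # True: loudness sums, strong masking
print(same_critical_band(1000, 1500))  # False: crosses bands, sounds louder
```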

Duration
Another factor to be taken into consideration is the duration of a sound, as it affects perceived loudness alongside the factors mentioned above. This matters because the sound of speech is not even but is characterised by a multitude of peaks (caused by certain consonants such as Ps, Ts and Ds), and this behaviour of the ear allows us to integrate those peaks into the rest of the sound flow. Generally speaking, the integration limit is around 200 ms: above that duration loudness is stable, while below it perceived loudness decreases with the sound's length. (5)

Direction
The directional perception of sounds in space is made possible by a subconscious comparison of the nerve signals coming from both ears (the binaural effect).

The four factors determining this phenomenon are the delay time and intensity difference of the signals perceived by each ear, together with phase and timbre. In more detail, the delay time, or time of arrival, differs between the ears depending on whether the sound source is in front of the listener (0 ms of delay between the ears) or to one side (up to 0.6 ms), while the intensity of the signal reaching the closer ear is naturally higher. (6) The signal perceived by each ear also differs in phase, but this cue is only useful when the sound's wavelength is greater than the distance between the ears, i.e. roughly below 500 Hz. Finally, timbre also plays a part, as the signal loses high frequencies on its way to the farther ear.
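The delay-time cue can be estimated with the classic Woodworth spherical-head approximation, sketched below; the head radius is an assumed average value, not a figure from the sources cited here.

```python
import math

# Rough sketch of the interaural time difference (ITD) using the classic
# Woodworth spherical-head approximation.

HEAD_RADIUS = 0.0875    # m, assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s

def itd_ms(azimuth_deg: float) -> float:
    """Delay between the ears (ms) for a source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return 1000.0 * (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} degrees -> {itd_ms(az):.2f} ms")
# 0 degrees (straight ahead) gives 0 ms; 90 degrees (directly to one side)
# gives roughly 0.65 ms, matching the ~0.6 ms figure quoted above.
```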

Environment and reflections
Among other effects, the Haas effect plays an important role in our perception of sound. Named after the experimenter who quantified this behaviour of our ears, it shows that the ear attends to the direction of the sound that arrives first and disregards reflections provided they arrive within about 50 ms of that first sound. Reflections arriving within those 50 ms are fused into the perception of the first arrival; otherwise they are perceived as distinct echoes. (7)
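A toy sketch of this 50 ms precedence window, with made-up example delays:

```python
# Toy illustration of the 50 ms precedence (Haas) window described above:
# reflections are classified by their arrival delay relative to the direct
# sound. The delay values are illustrative only.

HAAS_WINDOW_MS = 50.0

def classify_reflection(delay_ms: float) -> str:
    """Fused with the direct sound, or heard as a separate echo?"""
    if delay_ms <= HAAS_WINDOW_MS:
        return "fused (heard as part of the first arrival)"
    return "separate echo"

for delay in (5.0, 20.0, 45.0, 80.0):
    print(f"reflection at {delay:4.1f} ms -> {classify_reflection(delay)}")
```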

Historical developments in sound recording
Sound recording started in 1877, when T. A. Edison developed the first recording machine, the phonograph. (8) The machine comprised a small horn behind which a flexible diaphragm was connected to a stylus that cut an incision of varying depth into a tin foil in sympathy with the sound received; the inverse process on replay reproduced the sound from the horn. Since then, the main aim of everyone involved in recording research has been to record and reproduce audio signals as faithfully as possible.

The next development was brought about by E. Berliner, who invented the gramophone, a disc phonograph whose medium had the same shape as the records we still use today and which produced much better sound quality. The machines developed by Edison and Berliner involved no electrical apparatus, however; the recording and reproduction process was entirely mechanical. Around 1925, electrical recording, based on the principles of electromagnetic transduction, started to become widely used. That technology made possible microphones that could be connected remotely to a recording machine, bringing improved sound quality with a wider frequency range and a greater dynamic range. The 1930s saw the introduction of the first experimental wire and tape recorders, based on the principles of electromagnetism, using an electrically charged coil and either a metal wire or a tape coated with magnetic material, later replaced by plastic tape, which was longer lasting and easier to handle.

The development of stereophonic sound, introduced in 1956, led in less than ten years to the invention of the first multitrack recorder; this machine offered for the first time the flexibility of recording different sources separately and the possibility of overdubbing much more easily. (9)

The first experiments in digital audio recording took place in 1967 at NHK, the Japanese broadcasting corporation, based on the principle of pulse code modulation (PCM). A PCM recorder comprises an encoder that converts the audio source signal into a digitally modulated signal, a decoder that converts the digital signal back into an audio signal, and a recording medium. (10)
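A minimal sketch of the encode/decode cycle just described, assuming illustrative parameters rather than those of any particular recorder:

```python
import math

# Minimal PCM sketch: sample a sine wave, quantise it to 16-bit integers
# (the encoder), then scale back to a continuous signal (the decoder).

SAMPLE_RATE = 48_000   # Hz
BIT_DEPTH = 16
FREQ = 440.0           # Hz, test tone

max_code = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit signed samples

# Encoder: continuous amplitude -> discrete integer codes
samples = [
    round(max_code * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(8)
]
print("encoded:", samples)

# Decoder: integer codes -> amplitudes in [-1.0, 1.0]
decoded = [s / max_code for s in samples]
print("decoded:", [f"{x:+.4f}" for x in decoded])
```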

The 1970s were marked by several developments in audio recording, as video disc technologies, such as the optical system, became the subject of intensive research. After years of improvements and agreements between companies, 'the compact disc was finally launched on the consumer market in 1982' (11), becoming an important part of the audio business as well as of popular music culture.

The need for a small audio recording device that did not require a video recorder led to the development of R-DAT and S-DAT (rotary-head and stationary-head Digital Audio Tape recorders) in 1987. Digital Audio Tape recording allows recording at a higher, equal or lower sampling rate than the Compact Disc (48, 44.1 or 32 kHz) at 16-bit quantization. (12)
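Quick arithmetic on those figures makes the format's demands concrete; the stereo channel count is assumed for illustration.

```python
# Raw (uncompressed) data rate for stereo 16-bit audio at each DAT
# sampling rate quoted above. Channel count assumed to be 2 (stereo).

BIT_DEPTH = 16
CHANNELS = 2

for rate_hz in (48_000, 44_100, 32_000):
    bits_per_sec = rate_hz * BIT_DEPTH * CHANNELS
    print(f"{rate_hz/1000:5.1f} kHz -> {bits_per_sec/1000:6.1f} kbit/s "
          f"({bits_per_sec/8/1024:.0f} KiB/s)")
# 48 kHz stereo 16-bit comes to 1536 kbit/s, about 188 KiB per second.
```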

It is worth mentioning two of the most recent and important developments in this field, which are closely related to each other: MIDI and the digital audio workstation (DAW). MIDI (Musical Instrument Digital Interface), announced to the public in 1982, is a technology that allows digitally controlled devices, such as synthesizers, to communicate with each other, and it enabled computers to be applied to the process of music-making for the first time. (13)
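To make the idea tangible, the sketch below builds the raw bytes of a standard MIDI "note on" message; the note and velocity values are illustrative.

```python
# What travels over a MIDI connection: short binary messages, not audio.
# A "note on" is a status byte (0x90 + channel) followed by a note number
# and a velocity, each 0-127.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw 3-byte MIDI Note On message."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Build a raw 3-byte MIDI Note Off message."""
    return bytes([0x80 | channel, note, 0])

msg = note_on(channel=0, note=60, velocity=100)   # middle C, fairly loud
print(msg.hex(" "))   # "90 3c 64"
# Because the protocol is this compact, any device or computer that speaks
# it can control a synthesizer - the interoperability described above.
```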

In 1989, Digidesign released the first digital audio workstation system, Sound Tools, the 'first tapeless recording studio'. (14)

In conclusion, I believe that all of these technological developments should be considered equally important, as each was a necessary step for the ones that followed. However, the introduction of digital audio workstations has had the greatest impact on the music industry over the last twenty years. That development marked the beginning of a whole new era, leading to the widespread use of virtual instruments and virtual effects. Moreover, the cost of sound recording began to decrease in step with the increasing efficiency of computer technology. It is therefore evident how this transition influenced, and is still audible in, the sound of most music today.

Indeed, electronic music, which was the first to benefit from the inventions of MIDI and DAWs, is now present in virtually every musical style and plays a leading role in music production culture.

Reference Notes
1. McGraw-Hill 2007, pp. 1-2
2. Dallos 1992, pp. 1-2
3. http://www.daviddarling.info/encyclopedia/O/organ_of_Corti.html (accessed 9/11/11)
4. SSR Resources, Psychoacoustics, pp. 1-2
5. http://artsites.ucsc.edu/EMS/music/tech_background/te-03/teces_03.html (accessed 12/11/11)
6. Beheng 2002, pp. 1-2

7. Howard & Angus 2007, pp. 85-86
8. Ramsey & McCormick 2006, p. 154
9. Ramsey & McCormick 2006, p. 155
10. Baert 1998, p. 7
11. Baert 1998, p. 14
12. Baert 1998, p. 19
13. http://www.midi.org/aboutmidi/tut_history.php (accessed 18/11/11)
14. http://www.namm.org/library/oral-history/evan-brooks (accessed 18/11/11)

Bibliography
McGraw-Hill (2007), McGraw-Hill Concise Encyclopedia of Science and Technology, New York, NY, USA
Dallos, P. (December 1992), 'The Active Cochlea', The Journal of Neuroscience, No. 12
SSR Resources (2011), Psychoacoustics

Beheng, D. (2002), Sound Perception, Cologne, DE
Howard, D. & Angus, J. (2007), Acoustics and Psychoacoustics, Oxford, UK
Ramsey, F. & McCormick, T. (2006), Sound and Recording: An Introduction, UK

Image sources
http://www.deafnessresearch.org.uk/1974/how-we-hear/how-the-ear-works.html
http://www.webervst.com/fm.htm
SSR Resources, Psychoacoustics, p. 2
http://memory.loc.gov/ammem/edhtml/edcyldr.html
http://stools.wix.com/media/fa2e07d.jpg
