
An integrated acoustic rendering system.


M. Giordano, L. Seno, 25th National Symposium of AIA, the Italian Acoustical Society, Perugia, Italy, May 21-23, 1997.

One of the most fascinating topics in acoustical research of recent decades is certainly the simulation and reproduction of synthetic sound fields. The huge efforts made to obtain a plausible (which does not merely mean audible) solution to this problem are justified by the numerous fields of application of such know-how. From room acoustics to adaptive filtering and virtual reality, the need to master every perceptual cue in great detail (sometimes even in real time) is growing steeply. For this reason the complexity of models and algorithms is growing at the same rate, and demands ever greater computational resources. Our research is also concerned with these goals, although in recent times it has led us to investigate new directions in the fields of sound processing and composition. We will try to explain what this path really is, and how it was conceived.

On one side we focused on the problem of audio signal processing implemented on a parallel architecture; thus the "Fly 40" system was born, a natural successor to the "Fly 30". Unlike the latter, which featured a single TMS320C30 chip, "Fly 40" is based (although the hardware architecture is open) on an Ariel card carrying four TMS320C40 DSPs connected in a ring-pipeline fashion. Like the "Fly 30" system and the several applications built around it, "Fly 40" will also be used by Centro Ricerche Fiat as a powerful environment for synthesis and simulation.
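The ring-pipeline idea above can be sketched in software: each audio block passes through a chain of processing stages, each stage standing in for one of the four DSPs. This is only an illustrative model, not the Fly 40 firmware; the stage functions and block size are assumptions made for the example.

```python
# Illustrative model of a ring pipeline of four processing stages,
# each standing in for one TMS320C40 (an assumption for the sketch).

def make_gain(g):
    """Build a trivial stage that scales every sample by g."""
    def stage(block):
        return [g * x for x in block]
    return stage

def ring_pipeline(stages, blocks):
    """Pass each audio block through every stage in turn,
    as blocks would travel around the DSP ring."""
    out = []
    for block in blocks:
        for stage in stages:
            block = stage(block)
        out.append(block)
    return out

# Four placeholder stages whose gains cancel overall.
stages = [make_gain(0.5), make_gain(2.0), make_gain(1.0), make_gain(1.0)]
blocks = [[1.0, -1.0], [0.5, 0.25]]
print(ring_pipeline(stages, blocks))  # blocks pass through unchanged
```

In a real system each stage would run on its own DSP with blocks streaming between them, so all four processors work on different blocks at once.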

On the other side we are dealing with the topic of spatialization. The whole process of simulating an acoustic virtual space is a very complex one. First of all we need a satisfactory model describing the propagation of the sound field over a well-defined space with the desired degree of precision. We think of this model as a hybrid one: indeed, the wider the range of application of the model, the more we have to deal with different aspects of propagation such as diffraction, diffusion and normal modes. For this purpose we have developed a program that calculates the point-to-point impulse response of a space once the user has entered all the features (geometry, source, receiver, medium, planes) of the scene he intends to simulate. Great care has been taken in modeling reflections according to a spatial oversampling criterion, and in the frequency dependence of air absorption. We have also provided some tools for off-line analysis of the results. Once the impulse response has been computed, the subsequent step is its convolution with the input audio signal; this task may be accomplished quite efficiently by the above-mentioned DSP system, provided that the latter is adapted to fit it.
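The final step described above, convolving the computed impulse response with a dry input signal, can be sketched as follows. The toy impulse response (a direct path plus one delayed, attenuated reflection) is an assumption for the example; in the system described, it would come from the scene-modeling program.

```python
# Minimal sketch of auralization by convolution. The impulse response
# here is a placeholder: direct sound at lag 0, one reflection at lag 3.

def convolve(signal, ir):
    """Direct-form convolution of a dry signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

ir = [1.0, 0.0, 0.0, 0.5]   # assumed: direct path, then an echo at lag 3
dry = [1.0, -1.0]
print(convolve(dry, ir))    # [1.0, -1.0, 0.0, 0.5, -0.5]
```

The wet output is the dry signal followed by its scaled, delayed copy; a DSP implementation would typically do the same operation block-wise in the frequency domain for efficiency.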

The experience gained in carrying out this research, and the needs we reckoned with, led us to focus on a new architecture actually dedicated to the processing of audio signals. We think of it as composed of a general-purpose host supported by a powerful audio acceleration unit via a proper communication system. This unit may be called "the transformer" by virtue of its skill in executing its main task: providing a fast and useful representation of a time-domain object in any other frequency or time-frequency domain, and vice versa. This approach may not only provide a powerful tool for analysis and synthesis, but also cast new light on the topic of composition, allowing the composer to build by himself different "views" and transformations of the same musical material.
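The "transformer" idea, mapping a time-domain object to another domain and back, can be illustrated with a plain DFT/inverse-DFT pair. This is only a sketch of the concept under the assumption that a Fourier-type transform is the representative case; the proposed acceleration unit would presumably use fast, hardware-supported transforms.

```python
# Sketch of the round trip time domain -> frequency domain -> time domain,
# using a plain (slow) DFT as a stand-in for the acceleration unit.

import cmath

def dft(x):
    """Discrete Fourier transform of a real or complex sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, recovering the time-domain sequence."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [0.0, 1.0, 0.0, -1.0]
X = dft(x)                               # frequency-domain "view"
y = [round(v.real, 9) for v in idft(X)]  # back to the time domain
print(y)  # the round trip recovers the original samples
```

A composer working in such an environment could modify `X` (the frequency-domain view) before transforming back, which is precisely the kind of alternative "view" of the same musical material the text describes.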


