PEOPLE
Text: Miwa Yokoyama
Please tell us about your experiences at the Kuma Foundation, such as positive experiences and memorable episodes.
The Kuma Foundation draws students from a wide range of fields, from fine arts majors to engineering students developing technology. So when I explain my work, each of them interprets it in their own way. We had lively discussions, with comments like "If you look at the work from the perspective of animation…" or "There is a work with a similar concept…".
The students who belong to the Kuma Foundation have various connections. In my own case, I had many indirect acquaintances, such as friends of friends and acquaintances of seniors at my university. That helped me communicate with a variety of people, and we often talked about mutual acquaintances.
Please tell us about the works exhibited at KUMA EXHIBITION.
At KUMA EXHIBITION 2022, I presented a work called “Focus Change”. In it, I reinterpreted the noise-canceling filter, proposing it instead as a “hearing aid” for listening to the sounds of the city. I think the sound of a city is regarded as “noise” because so many different sounds collide with one another: the elements of each individual sound cannot be picked out, so they are treated collectively as “noise.” By creating a filter that breaks the sound of the city down into components such as “human voice” and “birdsong,” I offered viewers an opportunity to listen to the city once again. By presenting a usage different from the conventional noise-canceling one, I questioned the possibilities of a technology that has been confined to a specific application.
In the actual exhibition, video and audio recordings of city sounds captured in various places around Tokyo were shown on a display. Viewers used a controller to freely switch between filters and experience how the sound they heard changed. I also applied effects to the video to make the differences between filters stand out, synchronizing the image with the sound: when a “human voice” is heard, only the human figure is colored, and at other times the figure disappears.
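The synchronization described above can be sketched in a few lines. This is my own assumption of the logic, not the artist's actual implementation: given a per-pixel person mask (from segmentation) and a voice-activity flag (from the audio filter), the person either stays visible or is replaced by an inpainted background plate.

```python
import numpy as np

def composite_frame(frame, person_mask, background, voice_active):
    """Hypothetical sketch of the audio/video sync:
    voice heard  -> the human figure stays visible,
    voice absent -> the figure is filled with the inpainted
                    background so it disappears from the image."""
    mask = person_mask[..., None].astype(frame.dtype)
    if voice_active:
        return frame                                   # figure visible
    return frame * (1 - mask) + background * mask      # figure hidden

# toy usage: a white 2x2 frame, a black background plate,
# and a mask marking the top-left pixel as "person"
frame = np.ones((2, 2, 3))
background = np.zeros((2, 2, 3))
person_mask = np.array([[1, 0], [0, 0]])

hidden = composite_frame(frame, person_mask, background, voice_active=False)
shown = composite_frame(frame, person_mask, background, voice_active=True)
```

In the `hidden` frame the person pixel takes the background value, while the rest of the image is untouched; in the `shown` frame nothing changes.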
What techniques did you use for the work, and how long was the production period?
I used AI technology for both the video effects and the audio processing in “Focus Change.” For the video effects, I used semantic segmentation, a contour-extraction technique that cuts out the human figure, and inpainting, which fills in the background behind the cut-out figure. For the audio, I applied music source separation technology (which splits a track into drums, bass, vocals, and so on) to separate environmental sounds into “human voice,” “traffic sound,” “birdsong,” and the like. The production period was about six months, including the technology-development prototype. Since many models implement the same technique, I spent time verifying which combination of models best expressed the concept of the work, and training the models took additional time. After that I was able to go out to shoot and record footage and to build the furniture; that part of the work was fun.
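The separation idea can be shown in miniature. The toy below assumes each sound class occupies its own frequency band, which is a gross simplification: the actual work used learned source-separation models, which predict soft masks rather than fixed bands. Here a "filter" is just a mask applied to the mixture's spectrum.

```python
import numpy as np

sr = 8000                               # sample rate (Hz)
t = np.arange(sr) / sr                  # one second of audio
voice = np.sin(2 * np.pi * 200 * t)     # toy stand-in for "human voice"
traffic = np.sin(2 * np.pi * 2000 * t)  # toy stand-in for "traffic sound"
mixture = voice + traffic               # the combined "city sound"

spec = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), d=1 / sr)

# toy "human voice" filter: a hard mask over the voice band;
# a real separator predicts a soft, time-varying mask with a model
mask = (freqs < 1000).astype(float)
voice_only = np.fft.irfft(spec * mask, n=len(mixture))
```

Because the two toy tones fall on exact FFT bins, the masked spectrum reconstructs the "voice" component almost perfectly; real environmental sounds overlap in frequency, which is why a trained model is needed instead of a fixed band.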
What were the memorable moments at KUMA EXHIBITION?
At KUMA EXHIBITION, we were asked to communicate actively with viewers. Through those conversations, one viewer suggested that my work could be applied to the medical and welfare fields. I remember being surprised that some people thought about it in such a practical way, since my main aim was media criticism.