Reflections on various applications of AI

By Rushan Ziatdinov

In March 2023, I published an article in The Korea Times entitled “Use of ChatGPT and future of academia,” in which I predicted the emergence of several new AI-based tools and technologies. Despite criticism that AI tools cannot create truly novel concepts and occasionally hallucinate, I maintain an optimistic outlook on the future of applied AI. Nevertheless, one of my concerns is that AI and new technologies should not displace humans from their jobs but rather augment our activities as intelligent assistive technologies.

It is a sobering fact that millions of people worldwide are blind and live in a world devoid of light. However, there are now free apps, such as Seeing AI, developed by Microsoft, that allow users to take a picture with a smartphone and convert it to text, providing a description of the image. Furthermore, there are smart glasses developed at Caltech in Pasadena that can translate images into voices that can be understood intuitively without training.

These technologies will evolve and may be integrated into small wearable cameras or 360-degree cameras worn on the body. Alternatively, they could be incorporated into footwear, enabling a blind person to hear and fully understand their surroundings instead of seeing them, allowing them to understand objects, colors and, potentially, human emotions.

AI-based accessibility and assistive technologies can provide real-time captioning for the deaf and hard of hearing, such as live captions for videos or during real-world interactions (e.g., lectures, meetings, public announcements). They can also provide visual descriptions for the visually impaired, converting images into descriptive text that helps users understand the content of images and videos.
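
At their core, such visual-description tools pair a vision model with a language step that turns the model's detections into a spoken-style sentence. Here is a minimal sketch of that second step; the detection labels and confidence scores below are hard-coded placeholders standing in for the output of an upstream image-recognition model:

```python
# Toy sketch: turn detected objects (assumed to come from an upstream
# vision model) into a sentence a screen reader could speak aloud.

def describe_scene(detections):
    """detections: list of (label, confidence) pairs from a vision model."""
    # Keep only reasonably confident detections.
    confident = [label for label, score in detections if score >= 0.5]
    if not confident:
        return "No recognizable objects in view."
    if len(confident) == 1:
        return f"I can see a {confident[0]}."
    listed = ", ".join(confident[:-1]) + " and " + confident[-1]
    return f"I can see a {listed}."

# Placeholder detections standing in for real model output.
detections = [("person", 0.92), ("bicycle", 0.81), ("dog", 0.34)]
print(describe_scene(detections))  # the low-confidence "dog" is dropped
```

Real systems such as Seeing AI use far richer scene understanding, but the principle of filtering uncertain detections before describing the scene is the same.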

Similar tools could also be employed to train blind animals, although the precise number of such animals is unknown. Animals are known to communicate through a variety of signs, such as sounds and movements. In the future, AI could assist in deciphering animal language, enabling us to comprehend the meaning of animal communication. Various AI tools could be used to communicate with and train animals to live in a world of darkness.

According to the BBC, up to 7,000 different languages are spoken around the world. What about dogs? Do they all speak the same language, or do different breeds have their own? Can South Korea’s Jindo dog understand North Korea’s Pungsan dog? Good question. Scientists may one day use AI to answer such questions about animal communication and understanding, even though human language and animal communication differ to a significant extent.

It is conceivable that intelligent image- or video-to-text conversion technologies may be integrated into car black box cameras or more advanced lidar (light detection and ranging) sensors. Such a development could help drivers avoid various dangers, such as pedestrians crossing roads outside designated pedestrian crossings or animals crossing roads at night. In some cases, the relevant information could also be sent by the driver to navigation services such as Naver or Kakao maps.
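
The decision logic such a driver-assistance system might apply can be sketched as follows. The labels, distances and thresholds here are invented for illustration; a real system would receive them from the car's perception sensors:

```python
# Toy sketch of hazard filtering for a dashcam or lidar feed.
# All fields are placeholders for what a perception system would report.

def hazard_alerts(detections, is_night=False):
    """detections: list of dicts with 'label', 'distance_m', 'on_crossing'."""
    alerts = []
    for d in detections:
        # Pedestrians outside a designated crossing and close ahead.
        if d["label"] == "pedestrian" and not d["on_crossing"] and d["distance_m"] < 50:
            alerts.append(f"Pedestrian outside a crosswalk, {d['distance_m']} m ahead")
        # Animals are harder to see at night, so warn earlier then.
        if d["label"] == "animal" and (is_night or d["distance_m"] < 30):
            alerts.append(f"Animal on the road, {d['distance_m']} m ahead")
    return alerts

detections = [
    {"label": "pedestrian", "distance_m": 35, "on_crossing": False},
    {"label": "animal", "distance_m": 60, "on_crossing": False},
]
for alert in hazard_alerts(detections, is_night=True):
    print(alert)
```

The same filtered alerts are what could, in principle, be forwarded to a navigation service.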

Super-intelligent cameras could integrate various AI technologies related to text, voice, gestures, images or video. Could they assist life scientists in conducting experiments in nature and generating automated reports? If experiments are excessively lengthy or dangerous, such cameras could be a valuable tool for wildlife monitoring. This could involve, for instance, surveying the populations of wild boars in forests or of tigers and lions in a savannah, or discovering sea creatures that live deep underwater and are not yet known to science.
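
The automated-reporting part of such a monitoring system is, at its simplest, an aggregation step over recognized sightings. A minimal sketch, where the species sightings are hard-coded stand-ins for the output of a camera's recognition model:

```python
# Toy sketch: aggregate camera sightings into an automated survey report.
from collections import Counter

def population_report(sightings):
    """sightings: list of (species, site) tuples logged by smart cameras."""
    by_species = Counter(species for species, _ in sightings)
    lines = [f"{species}: {count} sighting(s)"
             for species, count in sorted(by_species.items())]
    return "\n".join(lines)

# Placeholder sightings from hypothetical monitoring sites.
sightings = [("wild boar", "forest A"), ("wild boar", "forest B"),
             ("tiger", "reserve")]
print(population_report(sightings))
```

A real pipeline would add timestamps, locations and confidence scores, but the report itself remains a summary over such records.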

The behavior of animals can be influenced by climate change. By understanding their methods of communication with the help of AI, scientists can learn how these changes affect their communication, migration patterns and overall ecosystems, enabling better strategies for conservation and climate adaptation.
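
A first step in analyzing animal sounds is extracting basic acoustic features, such as the dominant frequency of a call. The sketch below estimates the pitch of a synthesized tone by counting zero crossings; real bioacoustics tools use spectrograms and learned models, so this is only a crude illustration of the idea:

```python
import math

def dominant_freq(samples, sample_rate):
    """Rough pitch estimate from zero crossings (a crude stand-in for
    the spectrogram analysis real bioacoustics software would use)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    # Each full cycle of a tone produces two zero crossings.
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# Synthesize one second of a 2 kHz tone, roughly a songbird's range.
sr = 16000
tone = [math.sin(2 * math.pi * 2000 * t / sr) for t in range(sr)]
print(round(dominant_freq(tone, sr)))  # close to 2000
```

Features like this, computed over thousands of recordings, are what machine learning models would compare across seasons or habitats.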

Speaking of wildlife, I occasionally take early morning walks through small city parks with trees, during which I hear a multitude of beautiful bird sounds. I am grateful to the birds for these beautiful melodies. Perhaps the birds are communicating with me or with each other, although I am unsure. In South Korea’s cities, there seems to be little provision of food for wild birds, and I am uncertain where they feed themselves.

If AI is able to assist us in understanding bird sounds in the future, what insights might we gain from birds, and what requests might they make of us? Are the “wild” birds going to complain about the wild human beings who are pushing them away from their habitats?

Rushan Ziatdinov (www.ziatdinov-lab.com) is a professor in the Department of Industrial Engineering at Keimyung University in Daegu. He can be reached at ziatdinov.rushan@gmail.com.
