Keynote and Closing Session

Session Chair : Dr. Raja Kushalnagar

Transcripts

Nikita: Dr Manohar Swaminathan is a principal researcher at Microsoft Research India, where he is part of the Technologies for Emerging Markets group. He is an academic and technology entrepreneur turned researcher with a driving passion to build and deploy technology for positive social impact. His research and development focus over the past two decades has been on creating technology solutions for the Global South, with simple yet significant examples. His current research applies to diverse areas of disability, including Ludic Design for Accessibility, a new methodology which puts play and playfulness at the centre of all technology solutions for accessibility. May I now request Dr Manohar to take over the session. Thank you.

 

Dr Manohar: Thank you so much, Nikita. It’s a pleasure to have Dr Raja Kushalnagar with us at this conference. By the way, there is a second interpreter who is interpreting in ASL for Raja’s benefit, and I personally have not attended an event where there are interpreters in two different sign languages. It is indeed a pleasure to watch these two signings going on. Welcome, Raja. It’s really a great pleasure that he could make it here. Raja is director of the Information Technology program in the Department of Science, Technology and Mathematics at Gallaudet University, as far as I know the only university for the Deaf in the world. His research interests, very interestingly, intersect accessible computing and intellectual property law, with the primary goal of improving information access for people with sensory disabilities. In the accessible computing field, he investigates information and communication access disparities among people with disabilities, whereas in the legal field, he advocates for laws and policies for access and inclusion for people with sensory disabilities. He serves on organizing boards and committees that focus on the inclusion of persons with disabilities in computing fields, including the Computing Research Association’s Widening Participation committee, Teach Access, and the SIGCHI group of ACM. He has mentored over 100 undergraduates and 10 graduate students. He has received over $4 million in grants and has published over 70 peer-reviewed publications. I’ve had the pleasure of visiting Raja in his lab at the university, and it’s a fantastic experience to see an entire university which is actually a bilingual university, with American Sign Language and English. And it’s really fantastic. I’m looking forward to Raja’s presentation today, which will touch upon all of this. Welcome, Raja, and over to you.

 

ASL Interpreter (Interpreting for Raja Kushalnagar): Great, and thank you so much for that wonderful welcome. By the way, there is also a second interpreter here: one of the interpreters will be speaking, and the other will be signing. So I just wanted to be clear and start with some housekeeping and some ground rules, with myself as the presenter here. Actually, I was going to mention something, but I think I’ll hold on to that and just go ahead and start with my presentation. I wanted to start by sharing my slides.

 

Well, first off, I wanted to say that it’s so great to meet Manohar. Gallaudet University does a lot of work connecting technology across the geographical North and South, and also writes papers on how to develop technology that is accessible and has that global impact. So I’m going to touch a bit on both of those, and they will come up again a bit later as I proceed through the presentation.

 

I wanted to introduce myself: my name is Raja Kushalnagar. And I don’t mind how you pronounce it, because there are lots of different ways you can say it, with an American accent, an Indian accent, or a deaf accent. So Manohar is absolutely right, it can be said in so many different ways, but it’s really important that it’s spelt correctly, whether in an English or an Indian language. That’s the accessibility piece of this particular part of my name.

 

So with introductions over, I wanted to speak a bit about Gallaudet University. Gallaudet University uses both American Sign Language and written English, which means that both languages need to be accessible visually for it to be completely accessible. As I grew up in India, all through high school, my parents had a unique situation: the school was a small private school, and the teacher was a friend of the family. It became very easy to understand the people that were there; I was able to lip-read them, I was able to understand the writing, and I had the support that was required for me. That was a pretty good experience. And then my uncle said, why don’t you apply to an American university? So I decided to do just that: I applied and was accepted.

 

And I moved to the University of California, Berkeley, and it was really shocking to see such a difference in culture and accessibility. Some classes may have 500 students in an auditorium setting; you can’t really see the teacher, and it’s very difficult to understand what’s going on. So that was very hard to transition into. I would get bad grades: I was used to A’s and B’s, and then those B’s became C’s. And then I started to realize, okay, maybe it’s time to consider what success looks like for me. At that point, I decided to move to a much smaller university where there were just a few people in class and it was much easier to follow what was going on. I graduated, and then I started to pick up sign language here in the United States. Things were starting to become fully accessible in social settings as well. Regardless of whether I was one on one, reading lips, or using sign language in a social environment, or writing emails or business communications, all of that started to become much more accessible. I was able to get that full accessibility while being able to use those oral skills as well.

 

So now, I’d like to speak specifically about Gallaudet University, which stands out as the only university for the deaf and hard of hearing in the world. There are some places that have deaf programs, but they are part of a much larger university, and sometimes the clear focus of research, education and innovation in regards to deaf and hard of hearing is not quite there. So that’s where Gallaudet has the real benefit of standing out as a university and a model.

 

So this university was established in Washington, DC, which is the capital of the United States, about two to three miles from where Congress meets at the US Capitol. So there are lots of historical perspectives. A French teacher moved from France to the United States and helped raise the money to establish a college, and Gallaudet University was chartered by the President of the United States. Ever since that charter, anyone that graduates from Gallaudet University has the sitting President’s signature on their diploma. The university also provides premier support for deaf and hard of hearing students, from all countries and all areas. So again, the uniqueness of Gallaudet is very, very rich. I’ll move on to my next slide.

 

So I’d like to allow you a moment to read, because it’s interesting and very important to be able to look at the information visually before going back to access it through the sign language interpreter. That is a best practice that we put to use at Gallaudet University, and one we try to model throughout our presentations.

 

So typically, we like to start with statistics; we look at both country-level and worldwide figures. From this information we see that deaf and hard of hearing people really do represent a small proportion. Some folks have enough hearing to be able to access information auditorily; some people are more visual. So we break those out into two categories. The importance of this information is that the cultures of all communities and countries need to be considered, and ways need to be envisioned that can support this type of accessibility across the board.

 

Now, especially looking at this grid of people, I’d like to say that not everyone is the same, and it’s the same in India: not everyone is the same there. Every deaf person is different. They have different backgrounds; they grew up differently, oftentimes with parents who were hearing and didn’t sign; they may have been the only deaf individual in their school; their support systems may be very different. So the ways and experiences of one person can’t be assumed to work for everyone. Everyone has a different experience; there really is no single common experience.

 

There’s a variety of support that is added when bringing in visual language as well. I’d like to compare the differences between audiological properties or characteristics and visual properties or characteristics. When considering audiological properties, you think about the effects of all the audiological properties around you, such as intensity. What that means, for example, is that you can keep talking to other people: when you are one on one, you don’t have to talk loudly, you can still talk softly to each other while background noise is going on around you. If you are trying to talk to a larger group, then you would talk louder. You could also whisper if you want to keep information quiet, because someone across the room, or a bit further away from you, would not be able to understand what you were saying. So there are different types of information you can look at there; the space within the room is one thing that would be considered.

 

Now let’s talk about visual properties or characteristics. Think about the visual size that a person appears at. It is linear, and we know that doubling the distance reduces acuity by a factor of two. So if you’re chatting with someone in ASL, you use that visual size and your visual field to be able to see the other signers in the room. Because of that halving with distance, whether someone is across the room or across the street matters: in a given situation, somebody may have either a visual or an auditory advantage. So we see the pros and cons of each situation; taking in information auditorily is often preferred, and it can happen in multiple ways. The reason I wanted to make these two comparisons is their influence on technological design and access, and this will come up as well in our next few slides.

 

So now let’s go back to the diversity in communication styles among deaf and hard of hearing individuals, which is an important component when considering technological access and design. I’ve asked a number of graduates from my program what their preferences are and how communication happens when they are at their internships or in full-time employment. An intern, meaning a trainee, is gaining experience in the workplace with the goal of transitioning into full-time employment.

 

As you can see from this graph, about 20 individuals shared their experience in the workplace when communicating with primarily hearing individuals. And you can see that there is more than one option for a preferred way to communicate. Yes, a number of them have had speech training and are able to use their voice for themselves, but an equal number of them preferred written communication, whether that be text, email or instant messaging, and a few of them have chosen other means, maybe over the videophone or over the phone. I just want to emphasize that there is quite a variety of preferences in communication, and it’s all dependent upon their own personal experience, the situations in which they find themselves, and what might work best in a given situation. That’s something we’ll talk a bit more about as we go along in the presentation.

 

So you might be asking why I’m connecting this with speech. When deaf and hard of hearing people speak, they might not be able to hear their own voice clearly, which might impact the quality of communication. So there might be a need for additional visual aids to support or augment it.

 

As you can see on this scale, they have rated themselves, and only one individual said that other people understand everything they say. Most of them lie in the middle, saying that other people understand about half of what they say, or just a few of their words. What this means is that we know communication is critical, and for that reason, we don’t want to rely on someone having to use their voice only, because it might result in misunderstanding. If communication is less crucial, maybe more informal, then sure, someone might feel more comfortable using their voice rather than relying on another communication preference. And of course, this is my opinion.

 

So we asked a number of deaf and hard of hearing individuals why they might use captions, and in which situations they might use captions to support auditory information. The hard of hearing viewers said things like: if I can’t make out individual words, it becomes frustrating and a bit taxing, and I don’t want to work too hard when I’m trying to enjoy something; or, simply making speech louder is not enough to understand, so I use captions all the time.

 

And we see the clear benefit of visual support for language. We know that sometimes captions aren’t perfect: sometimes they’re too fast to read, sometimes they come on the screen in a very disjointed way, so an interpreter might be preferred; and maybe the captioning doesn’t have correct punctuation, which makes it more difficult to comprehend what is being said.

 

So just something to keep in mind: visual support and accuracy are both critical when it comes to captions. This is an example of what a hard of hearing person might hear or access in spoken language. As you can see, there are words, but the words are nonsensical. In this example, it could have been accented speech.

 

So this is a children’s story called Little Red Riding Hood; that’s what this text was from. Maybe you’re familiar with it, maybe you aren’t, but now you see what was actually being stated. If you read and compare the two, you can quickly understand how those words sound similar. However, the message is completely different; the terms are completely different.

 

Again, just one example of where, if you don’t have the best sound quality, you’re certainly going to need the visual support. So now we’ll shift gears a bit to design and accessible technology specifically for deaf and hard of hearing people. I do want to note here that communication is often designed for hearing individuals who can watch and listen at the same time and take in information in both modalities. If we move the audio support to visual support, then we are watching two different visual things at the same time, specifically reading and scanning between them, and vice versa.

 

In a predominantly hearing environment where you have a Deaf signer, the Deaf signer will be at a disadvantage, because they are waiting for the signs before something is shown, and this extends the amount of time communication takes. One benefit of visual communication is that there can be a conduit, and the conduit in this situation would be an interpreter.

 

So when we talk about providing access in a predominantly hearing environment, whether that’s via communication or information like a presentation, it’s important that it be designed in a way that takes visual access into account. You might want to match the auditory information with the visual information to make sure you are accommodating that individual.

 

So now I’d like to show an example of how eye gaze is important, and how it’s crucial to give enough time for an individual to take in information that they have to read, and then also to attend to the sign language that’s being shared. This is an example of following the eye gaze of a viewer watching a presenter who’s using visual aids, and it’s timestamped. Give me one moment.

 

So, you see how the eyes move during this type of presentation. A hearing person in a predominantly hearing environment is mostly watching and taking in information, and they might be reading something on their own. So time must be given. Some people are more auditory and some are more visually focused, so of course this just depends. The point is not to rush; give people time to process the information, whether they process in a more auditory or a more visual fashion. Whether we’re talking about captioning or taking in sign language, it is a very similar process, and I’ll give that example now.

 

So we’re following the eye gaze here to show where the Deaf participant is looking during a presentation. In the previous slide, the captions were presented in a word-by-word fashion, where one could access the information that was shared previously; but sign language, like speech, is linear.

 

Meaning you’re not going to be reading something left to right; it’s going to be taken in in a different visual fashion. There’s no need to scan or to change the eye gaze from side to side to attend to two different messages, so in a sense it’s a bit easier on the eyes. However, you might lose that historical access to information, because you can’t see the information that was shared prior, as you could if you were just reading the captions. So the person is looking at who’s signing: their face, their body language, all in their visual field. And they’re able to take in more content just from looking in one place, as opposed to merely reading captions and accessing only a written form of a language. So do keep in mind that when you’re providing access, different people have different priorities and preferences. For me, I think captions are great when the content is important, like when I’m watching a lecture; however, if I’m in a social situation engaging in a discussion, I would like to have an interpreter, because I want to be able to take in all of the rich context from the speakers in the room. Also, the interpreter will help me to identify who’s talking, so I need less time for my eyes to scan to locate the speaker.

 

So that’s another thing to think about when you are designing different types of technologies for communication access. I want to share some best practices for providing access, like an interpreter or captions. If there is a presenter or some sort of demonstration taking place, in a lab for example, you always want to give a brief delay between what is being talked about, what’s being shared or looked at, and what is being said.

 

And you might want to resist the temptation to show something or demonstrate something and speak about it at the same time. This benefit will be felt by everyone, because they can see something and then hear the explanation.

 

Additionally, you want to give enough time for questions and discussion. We know that there is going to be a brief delay between the information that’s being shared and the information as it’s being interpreted, if there’s an interpreter in the room, so you want to give ample time for that information to be processed, and then for questions to come back to you.

 

Similar to auditory noise, there is a concept of visual noise. By visual noise, I mean something that interferes with the ability to see clearly, something that might be distracting or might be physically blocking someone’s view, say, a computer monitor in front of you in a classroom, or a lack of ample lighting in the room in which you’re presenting. This slide has a few examples of visual noise, and a brief video about how visual noise can impact learning.

 

So in this situation, we have a student who is using captioning in the classroom. So the teacher is speaking next to a whiteboard. And the student is having to look and change their eye gaze to take in all of that information.

 

You also might notice the distance, the lighting, and possibly the need to reduce some of that visual noise to make the learning environment the most effective. There are muscles in the eyes that contract or expand to take in information that is further away, whereas taking in information auditorily is more passive. So you want to do what you can to lessen the load on those muscles.

 

For deaf and hard of hearing individuals, we recommend that they sit at the front of the room with clear visual sightlines, so they can take in the information as it’s being presented. You want to avoid any visual obstructions that might be in front of those people, and you also want to ensure ample lighting.

 

I’ve worked in accessible technology engineering, and I have this quote from someone who was the president of a national engineering academy: I believe that it’s a highly creative profession in need of diverse perspectives. Without the diverse perspectives, you will not have the solutions that you need, and if you’re not getting those ideal solutions, then you’re not going to be making the best technology available.

 

So, as an example of why deaf and hard of hearing people should be incorporated into the design of accessible technology: after the COVID pandemic hit, a number of organizations and different software products, like Zoom, came to the fore. And they realized that much of what existed was outdated software, software that made auditory information the priority, whereas Zoom was making the visual the priority. It’s the concept of the talking heads, the simplest way to put it: you have a video of someone, you’re seeing only their face and head and taking in information. We realized we needed a range: the ability to prioritize visual information, while also having the option to prioritize auditory information, to meet the needs of everyone. Over time, we’ve seen this improve. So now both hearing and deaf people benefit from the technology we’re using today, like Zoom, because they can access the information visually and auditorily. A lot of the design of Zoom was influenced by deaf users’ input, for example the ability to pin multiple people, the use of the spotlighting system, and the availability of captions as well, which has had really great impacts for users in America and internationally. And that happened because of the input of users and the requests that were made to make some changes.

 

Just one second, we’re going to switch interpreters. So now I would like to talk about language and technology. So far, I’ve talked about how technology can both give and take access to and from people. Back in the early 1900s, there was a film industry, but the audio, or sound, part had not been developed yet: this was the silent movie industry, and it was around for about 30 years. Deaf and hearing people had equal access to what was going on in the movies during that timeframe.

 

Then in the 1930s, they were finally able to add sound to movies, and at that point it became a bit different. Again, deaf people had had access to the movies all that time, and when the auditory information became the priority, they lost that access. So there was an essay written in a national organization’s journal by a deaf person that talked about that loss of access, and dreamed that a technology solution would come forward to restore it. You can see it here on the screen. Deaf and hard of hearing people at that point were not involved with technology development. About 10 or 20 years later, deaf people asked the government to start to support the captioning of movies.

 

Now, that did begin, but it was never a high priority, and it took a long time, until about the 1970s, to achieve anything within that field and add captioning to movies. Nowadays, everything is captioned. We also have automatic speech recognition, which is not perfect, but it is available. So the technological advances are clear.

 

Now, in regards to universal design and technology with captions: it’s not only deaf people that are using captions today. Within the field of captioning, there’s a large percentage of users that are not deaf, and may actually be hearing, who also access these captions.

 

For example, deaf and hard of hearing users are the two largest groups of users, but there are also people who learn English as a second language. They would like to have both the auditory and the visual stimulus to be able to enjoy watching or accessing information; they may not enjoy just listening to it, they need to also be able to read it. And about 20% of caption users are hearing people. Now, that’s pretty big: 20% of the users that use captioning are actually people that can access the information auditorily. So my point with this slide is that accessible technology really is universal design, and it’s beneficial for all people.

 

Also, oftentimes we don’t have access to the audiological information: if there are a lot of TVs running simultaneously, such as in a restaurant or in a large public area, it’s entirely too noisy, and people are not able to access that information. So that also influences popular design and policy. That’s an additional benefit of captioning for all people; again, that’s just one example.

 

In the interest of time, let’s talk about diversity in regards to captioning. Universal design still includes flexibility to accommodate the different needs of different individuals, and that’s definitely improved over the last 40 to 50 years. For example, our research here at Gallaudet University has focused on how to create captions that are more accessible, for different reasons. A common problem has been identifying who is speaking: whose speech is being shown in the caption, and is the speaker identified within the streamed text? Oftentimes it’s just the content; there’s nothing about who’s providing the content or who’s saying the words that are being captioned. So we have to connect that somehow, and there are different ways that can happen: you can have captioning that also identifies the speaker, or you could use the location or some type of indicator that shows who is speaking when a particular caption is on the screen.

 

Now, we found that in most situations, the captions are just left where they are at the bottom; that’s typically what happens. So it becomes a bit confusing if there’s back and forth happening on the screen, and it doesn’t really work in all situations to have the captions in a uniform location.

 

My next example explains why speaking is typically faster: it’s usually 100 to 130 words per minute. The captions, which of course are presented line by line, typically stay for half a second to a second per line. So you have to be very careful that the captions don’t disappear super quickly, or the content is missed, for example when you have back and forth between two individuals. I’d like to show you this video; it might give you an idea.

 

So there’s a joke about overlapping speech. If you’re listening to it, it may be funny, but it can be really confusing when you’re trying to look at the words and comprehend the content. Folks will often laugh when they really have no idea what’s being said; they’re lost. So that back and forth is not really effective; it’s much better to have the captions in a uniform location between the two speakers on the screen, low enough that they don’t distract from the picture.

 

So again, these are examples of design that has improved over time and benefits all people. Another example of universal design, and this is a pretty good example: Google and Gallaudet University worked together on a live captioning, or automatic speech recognition, program to benefit people who need to use that for communication. It starts quickly, it’s readable, it reduces background noise, and it supports typing. The result of that collaboration has been that this particular application has been downloaded millions of times, in different languages, including languages used in India.

 

So my parents speak Kannada, a language used in India, and as an example, I can use that when I go to India to visit and interact with my family. The whole point that I’m trying to drive home today is about understanding and design and the contributions that are provided to the wealth of knowledge in the technological field. It’s really important that hearing people recognize the need and the necessity of including people who are deaf and hard of hearing, and also that some of the technologies that are emerging are benefiting both the hearing and the deaf and hard of hearing communities. Don’t think about the technology as being developed for them; rather, include them in the development, asking for the opinions and the input of the people that will be using the technology. This means the process has to change a bit: asking for that input before the design is made, and really including the end-users from start to finish; listening to that input and those ideas from the community, involving them in testing, soliciting feedback, and having them involved in the entire design cycle.

 

Here in the United States, there was one deaf person who was very important within the state of Texas. This person was an aide in the war for independence of Texas from Mexico. He grew up as a deaf person and was very visual and gestural, and working with the indigenous populations in Texas at that time, gestures were very important. So he had the ability to share information with many of the tribes of indigenous peoples in Texas, and he was able to leverage that gestural communication to help win the war of independence from Mexico for Texas. He was critical, and it’s important that we recognize his contribution, which was so critical that it helped to win the war. There are so many people over time who have not been honored; oftentimes cities are named for people, but not for people such as this person.

 

So the variety of contributions that are out there, and the diverse benefits, really do impact a myriad of situations. That’s where I wanted to close the technology-building piece, and also my presentation.

 

Dr Manohar: Perfect timing, Raja, thank you so much. I was just sending you a message that we have three minutes, but you closed perfectly on time. So the floor is open for questions from the listeners. Yes, Prof Bala, please go ahead.

 

Prof Bala: Can you talk a little bit about the subjects and disciplines you have at Gallaudet University?

 

Dr Raja Kushalnagar: Okay, so Gallaudet University started out as a liberal arts university. Historically, that would include English, and then they’ve added many other fields, including technology, linguistics, communication, and the various sciences. The university itself has many different majors and fields, as well as many different minors. There are 40 different majors and minors in which our students can matriculate. And the schools are grouped together; I believe at this point we have science, technology, accessibility, mathematics and public health, which is my current school. And, of course, there are other disciplines available in the other schools throughout the university.

 

Prof Bala: The question that I wanted to ask was, since it’s a technical university, is American Sign Language rich enough? Some of this content is very visual, let’s say equations, you know, chemical equations, or differential equations. These things are visual. Are there also signs for these?

 

Dr Raja Kushalnagar: Yeah, that’s a great question. So the teachers do use American Sign Language during the four years of undergrad as well as in any graduate studies. But the important thing is that we have enough people within the field who have a common vocabulary. For example, if you have someone who is speaking and someone who is signing, then looking at translation, you have to look at message equivalency between the spoken English and the signed rendering. Typically, sign language has twice the amount of information, and it takes twice as long to render that information, especially when you’re talking about gestural communication around the world. So you have to think about the concept of translation into this different mode. Now, in technology and science, we have agreed on some signs in the vocabulary that will be used in perpetuity. They match conceptually, and they follow the rules of the language itself. Some signs have been borrowed and may not always follow those set rules, but they’re there. And, yeah, it’s really easy to find the correct sign for what you’re looking for.

 

Dr Manohar: So I have another question from Akila in the chat. Raja, could you probably read it from the chat, if I can request you.

 

Dr Raja Kushalnagar: Right, so that is about developing technology that also preserves a person’s privacy when they create video content. That has not yet been created, which means that, for now, the signer or the speaker does lose that privacy. There are different people within our university who are developing different ways to present information through avatars, which adds the privacy, but the technology is not there yet. We’re still working on it, and it is still probably years away.

 

Dr Manohar: May I add a comment to that. At the recent ASSETS conference, there was a paper which tried to anonymize sign language videos. It’s again a very big step, but not the final solution. But you could look at that paper. Very interesting work.

 

Another question from Namita. I also have a similar question.

 

Dr Raja Kushalnagar: And this has to do with visual impairment, yes. Here in the United States, there are people who are deaf-blind, and people who are deaf and low vision, and they do use different technological solutions. Our deaf low vision folks may have magnifiers that they use as an accommodation, and when it comes to sign language interpreters, they may be providing access much closer to someone than they normally would be. For people who are deaf-blind, there are different approaches that they use, such as protactile sign language, in which sign language can be felt. Gallaudet University has also developed its own systems, as we are investigating the benefits of technologies such as robotics and, as I mentioned, avatars. But again, all of that is still in the very early stages. Right now, human interpreters are still the best option.

 

Dr Manohar: I have a question with respect to best practices in presenting what you were describing. If the audience comprises both deaf and visually impaired participants, as is the case today, when you present a visual and allow time for reading, the people who cannot see the visual are at a disadvantage. So how do you deal with that aspect?

 

Dr Raja Kushalnagar: Absolutely right. And, my bad, I have to say my presentation was not accessible to people who are visually impaired unless someone asked me to read the slides. Oftentimes, we will have some type of audio description that is included as well; there may be a separate person who reads the content on the slide for everyone who needs to access that information auditorily. Also, in movies and television programs, there are descriptions of what is on the screen. We do have two separate types of captioning there: you can have both the captioning of the content and the captioning of the audio descriptions, with the audio descriptions on a different sound channel. American TV supports a second simultaneous audio channel, called SAP, so that if a person knows how to set that up on their television set, they would be able to access the reading of that content on that second channel.

 

Dr Manohar: Yeah, again, at this year’s conference, there was a paper that addressed this exact question of extracting text from the screen and presenting it as text-to-speech. But this is evolving technology. And I think diversity in the audience itself triggers so much innovation, the need for innovation. So I’m very glad that this was brought up here. Any other questions from the audience? Raise your hands. It has been a wonderful talk; we’ve been enjoying it.

 

Yeah, I think Namita also has a comment: that signing speed is an issue when we interact within a group with deaf, deaf low vision and deaf-blind members. As the diversity of the group increases, the complexity of the solution that we need rapidly goes up.

 

Dr Raja Kushalnagar: Yeah, absolutely. And fortunately, the technology is catching up with that, with automatic speech recognition, so people are able to access that information visually via a computer. And I feel that makes accessibility better for all. Right now, it’s still really hard and complex, but it’s getting there.

 

Dr Manohar: I think we will close this session. But I have a request to Raja: I see this as the beginning of a collaboration between Gallaudet and the community of people working with, and for, people with disabilities in India, and you will certainly hear from many of us after this conference. Thank you so much, again, for being with us and for giving this talk.

 

Dr Raja Kushalnagar: Absolutely. And I’m excited for more collaboration.

 

Prof Amit: Thank you very much, Dr Raja, Dr Manohar, Jay, Priti and Jessica. Thank you for a wonderful session. So we’ll move on. We have a few announcements as part of the closing session of the conference. And I will request Dr Akila Surendran to talk about the student design challenge competition, and the winner of the student design challenge competition.

 

Dr Akila: Yeah, thank you. So we organized the student design challenge. We started in August; there were five problem statements that were given to the participants, and we had around 30 teams that initially registered. Then we had very rigorous rounds of detailed proposal submission, and we had three mentoring meets where the participants had to meet the mentors and present their progress on the project throughout. Given the COVID situation and all the problems it posed for collaboration between teams, it was a challenging experience for the teams to keep up with this rigorous process, and we did see a lot of dropouts. But there were a few teams that stuck it out till the end, and we are really proud of the work they were able to pull off in this brief period of two months with the pandemic constraints. The committee had a lot of discussion after the final presentation, which took place on Thursday, and one team clearly emerged far ahead of the other participants. We are really glad to announce them as the winners of this challenge. Team Creative Crew, are you here?

 

Swati, you can introduce yourself.

 

Swati: I’m Swati, I’m from Pune. I study at MIT LT University. Ayushi, Sehej and Sumedha are from IIITDM Jabalpur. Thank you so much for this opportunity. And I thank all the mentors and the organizing team of EMPOWER, thank you so much.

 

Akila: Yeah, Swati, we want you to take the project forward and not stop with this. It’s a really exciting and unique solution, and we would like to see it reach the end-users.

 

Prof Bala: Can you briefly describe for the audience what the problem was that you were working on?

 

Swati: Sure. So we were working on assistive switches, basically to manoeuvre between electronic devices, for people who have motor disabilities. The problem statement had one specific requirement: it has to be HID-compliant, and the cost should be reduced. We basically came up with a casing for a mouse that everybody generally uses at home. So that cuts down the cost, as well as making it very accessible for people who want to use it as a switch.

 

Anil Prabhakar: Is it possible to share some of the designs right now? Do you have an image that you can show?

 

Akila: Yeah, so the prize amount that Swati and team are winning is 25,000 rupees. So have fun, Swati and team.

 

Swati: Thank you so much.

 

Anil Prabhakar: Can I share my screen, so the audience can see what it is? Swati, maybe you want to just talk them through this? Or maybe one of your teammates?

 

Swati: Sure. So I think all the images are of a similar type. It’s basically a mouse over which there are coverings that can either be 3D printed or just folded from cardboard or sheet metal. They act as coverings for the mouse, which, when tapped at any point, is translated into a left-click or right-click, so it can be used as an assistive switch.

 

Anil Prabhakar: I just want to say that this is a very creative way of solving the problem. Most people like me would have started by designing the mouse. But I think having them come up with the casing meant that a lot of the production costs, as well as the difficulty of getting all the technology right, was removed, and they could focus, from a user’s perspective, on what makes it functional. I really think that was a very refreshing approach that they have used. And we hope to see many of these switches being used by people who need them. So thank you very much for that, Creative Crew.

 

Akila: I can actually show their prototypes.

 

[Video plays]

 

Anil Prabhakar: So under this is the actual mouse that they have attached; they’ve just got a casing on top of it. So you can take any commercial mouse and put it into this casing, and it will work as a single click or double click depending on which casing you choose.

 

So you can see the mouse that is sitting under that folded cardboard out there. In this case, again, the mouse is not something that they have designed; they’ve just used the casing around the mouse. I think that’s very innovative; it means that one doesn’t have to source it from a specific vendor. You could go to Amazon or Flipkart, buy a mouse that you like, slide it into a casing like this, and it becomes an accessible switch for someone with fine motor impairments.

 

Akila: Let me share something that I observed: this team spent a lot of time on ideation and brainstorming. And I was wondering, oh, they have so much technical work to do after this, because the problem was to design a switch with both the electronics and the mechanical engineering. And then this team took something that exists and just put a casing over it. That was very innovative and a surprise for all of us.

 

Prof Amit: Thank you. Thank you all for conceiving of the student design challenge as part of Empower; this was the first time we did it. It was so good to see all these teams participate, and the mentors invest a lot of time with these teams. We do hope that Creative Crew, with this support, take it forward. I hope the next editions of Empower will have an even greater engagement with students, and we’ll try to see how we can make it much more attractive for students to come into the field of assistive technologies. So congratulations, Creative Crew, and thank you to the entire team of mentors who put the student design challenge together.

 

So we have had a long, slightly long conference, right? We started at 10 am on Thursday, and we have been on for almost 10 hours a day for the last three days. I’m sure all of you are tired, but it has been such a fantastic and interesting experience, listening to people and seeing the wonderful work going on all around, and I hope we are all kind of refreshed, kind of rejuvenated. We are looking forward to the next edition very keenly; I’m very keen. So let me invite Prof Anil Prabhakar of IIT Madras, and Dr Akila Surendran of the National Institute of Speech and Hearing, Trivandrum, to tell us more about what we should look forward to in Empower 2022.

 

Anil Prabhakar: Well, thank you, Amit, and we would like to thank the Empower program committee for giving us the responsibility of Empower 2022. What we have envisaged, what we are dreaming of, is that we will have an in-person Empower 2022. It will be held in Trivandrum, hosted by the National Institute of Speech and Hearing. The chair of the Organising Committee is Akila, and I am handling the program committee.

 

We look forward to having everyone in the beautiful state of Kerala, especially foreign visitors, who will find that this is God’s own country, and we do hope that others will also come and have a wonderful time. There was some discussion within the organizing committee of Empower about where to have it. This is actually supposed to have been the fourth edition of Empower: we’ve had two of them in the north, and now we’ve had one at IIIT Bangalore, which unfortunately we were not able to travel to, but it was in the south. We are hoping that people will be able to attend physically at Trivandrum, and I look forward to having you all there. From the perspective of the program committee, what I’m going to do is send out mailers requesting feedback on the program of Empower 2021 and suggestions for what you would like to see in Empower 2022. We will of course have our keynotes, we will have our invited talks, and we will have sessions.

 

As Amit said, we will also continue with our student design competition. I think it was a very good learning experience, both for us as organizers and for the students. And if we are post-pandemic, we expect that we will have good, healthy participation in the student design competition. So do look out for the mailers from us asking for suggestions for the program for Empower 2022, and respond to them. We look forward to hearing from all of you about what you thought of Empower 2021, and what you think Empower 2022 should aspire to be. Akila, do you have a presentation for them?

 

Akila: Yeah, I have some slides to attract everyone to register for 2022. So, yeah, I wanted to start by saying how much the Empower conference means to me as an AT professional in India. I remember attending the first two editions at IIT Delhi, and that was where I met a lot of people and made a lot of connections that have been very useful to the work we are doing in our center right now; a lot of collaborations started at this conference. I’ve always looked forward to it, and later to becoming a part of the program committee, taking charge in organizing the sessions. And now I am very happy to be hosting the next edition of Empower at our institute.

 

Another reason I feel happier about it is that earlier it was held in technical institutes, like IIT Delhi, and now we are hosting it in a rehab institute, where there are a lot of persons with disabilities. I think that adds another dimension to the conference, which is: nothing about us without us. So I’m very happy to be part of this movement. And, yeah, when you say Kerala, as Prof Prabhakar said, we think of beaches, we think of the green spread; it’s God’s own country, and it’s a tourist paradise. So anyone coming for Empower, you’re sure to have a good time when you come to Trivandrum next year. This is Kovalam, and the backwaters of Kerala, as you can see.

 

Kerala now is growing as home to the Fab Lab movement and the Kerala Startup Mission culture. So in technical areas also, it’s growing a lot. At Empower, we hope to tap into some of these areas, organize some events around this, and make use of the infrastructure that is available. So here’s a short video.

 

[Video Plays]

 

I know you can find maker labs everywhere, but in Kerala, they are available to the public, who can book these machines at very low cost. There are many Fab Labs, in almost every engineering college in every district throughout Kerala, and so students are very familiar with these digital fabrication and prototyping techniques. And, yeah, this is our beautiful NISH campus.

 

[Video Plays]

 

Yeah, and there has been another great addition recently to the NISH campus: we’ve made the campus more accessible. We’ve tried to make it fully accessible to persons with disabilities. Here’s an example of an accessible ramp. Earlier, the accessibility auditor who came called the ramp a rocket launcher; the ratio was all wrong. Now it is smooth and everything is as per the accessibility guidelines. You can see the tactile tiles there, and the handrails at two levels: one for adults, and the other for children. So we are very familiar with all these building accessibility guidelines, and hopefully everything is accessible to persons with disabilities.

 

We also have enough space on campus to have some fun. We have a very nice auditorium where we can have these talks, and maybe some cultural events as well. Yeah, so see you at NISH for Empower 2022. Thank you.

 

Anil Prabhakar: Thanks, Akila. I had forgotten what an auditorium and a cultural event look like.

 

Akila: I know, it’d be easy to fulfil expectations, because it’s been two or three years since anyone had any kind of get-together.

 

Prof Amit: So thank you, everyone. I would just like to take a couple more minutes to express my appreciation and gratitude to everyone who made the organization of this conference possible. First of all, I should thank all the participants; even on the third day of the conference, in an almost late-evening session, we see 30 people. I think we owe it to all of you for your wholehearted and enthusiastic support of the conference. So thank you very much, everyone who registered. Thank you very much for participating very enthusiastically in all these sessions.

 

I should thank all our speakers: our keynote speaker Dr Raja Kushalnagar, who is here, and all our other keynote and invited speakers, all of whom very graciously accepted to come and talk at Empower at times inconvenient for them, from different parts of the world; all our paper authors, the workshop coordinators, instructors, as well as the product demonstrators. I should thank all our ISL interpreters, Priti, Aniket, Manisha, Atul and Vanshika, for their support throughout the conference sessions, and also the ASL interpreters in the last session, Jessica and Jay.

 

My gratitude and appreciation to Vidya for the invocation song, and to Swathi for beautifully video-editing that song with pictures from the IIITB campus. We loved the rendition of We Shall Overcome by the school children; thank you to all the children for setting a good tone for the conference. Thank you, Nibin, Rajeshwari and the vision and PR team who were involved in its coordination, and Richard for video editing and production.

 

My gratitude to Vasanthi Hariprakash and her team at Pickle Jar for the teaser video and for curating the film festival on Inclusion. We are also very thankful to our sponsors, Microsoft and Google, for the support which allowed us to open up the conference sessions to wider participation, and we look forward to your continued support for Empower 2022 and beyond.

 

And I am very thankful to all my colleagues on the program committee: Prof Balakrishnan, Dr Manohar, Mr Mohan Sundaram, Mr Anil Prabhakar, Dr Akila, Dr Charudatta Jadhav, Mr Depender, Dr Namita Jacob, Sujit, Prof Rao, Prof Mukta Kulkarni, Prof Gaurav Raheja, and everyone else who has always been very supportive and has been enthusiastically contributing to the conference planning and associated activities very patiently throughout the past one year.

 

Lastly, my colleagues at IIITB: our present director Prof Das and former director Prof Sadagopan, for all their continuous guidance and support; my colleagues and associates at the Center for Accessibility in the Global South at IIITB; Aarti Varghese for taking care of the website and registrations; and Nikita, Nibin, Aishwarya, Ritik and Gayatri for ensuring the smooth conduct of the sessions and for providing all the required backend support.

 

I would also like to thank Vignesh and Pushpa Rao for taking care of payments and other financial aspects, and Vivek Yadav and Vishnu for IT support and guidance. It has been a wonderful experience to be involved in organizing Empower 2021. We look forward to working along with all of you towards creating an AT ecosystem which is more inclusive and accessible. We are all looking forward to meeting in person next year at Trivandrum. Till then, please take good care of yourselves, and stay safe.
