This week we continued exploring the possibilities of AR by trying to find a practical application for it within the healthcare industry. The intended audience is either people who work in healthcare, such as doctors, nurses and medical students, or people receiving healthcare, such as hospitalized patients and people with disabilities.
Our group discussed possible use cases for an AR healthcare application. The problem was that most of our ideas had already been implemented in the past, with varying degrees of success. One use case we liked was using AR to scan part of a person’s body (e.g. an arm) and highlight the veins, making it easier to take a blood sample. We found out, however, that the same result can already be achieved by illuminating the desired area with a specialized light source.
Despite our frustration, we kept trying to come up with another use case. We focused on the problems a modern-day surgeon might face. A typical operating room is packed with machines and monitors that visualize patient data. The doctors need to pay close attention to this data at all times, which can be overwhelming, especially when an average surgery lasts several hours.
We started questioning how an operating room could be optimized and came up with an idea for an app that uses AR to display, in real time, data received from sensors attached to the patient. Such an application could also benefit military doctors, who may not have access to the luxuries of an operating room when providing first aid on a battlefield.
The next step in our process was to find the appropriate hardware for this type of application. There are currently many different technologies used to render AR content, including handheld devices, eyeglasses, head-up displays (HUDs), optical projection systems and other body-worn systems. We concluded that a wearable mobile device would fit best, and one of the most obvious choices was Google Glass.
So the general idea was to incorporate AR functionality into a wearable mobile device such as Google Glass. The device would keep sensor data visible on part of its UI at all times, without limiting the user’s vision. This means a surgeon could monitor the patient’s heartbeat, temperature, blood pressure etc. without having to move their head and look around the room, and could instead stay focused on the task at hand. An example of how this might look:
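As a rough sketch of the kind of overlay we had in mind, the snippet below formats a set of vital signs into a single compact heads-up line, the way a Glass-style app might pin it to a corner of the display. The field names, units and values here are made-up illustrations, not a real device API.

```python
# Hypothetical sketch: render a dict of vital signs as one compact
# overlay line for a heads-up display. Field names and units are
# assumptions chosen for illustration.

def format_hud(vitals):
    """Return a single-line HUD string built from a vitals dict."""
    return " | ".join([
        f"HR {vitals['heart_rate']} bpm",
        f"BP {vitals['bp_sys']}/{vitals['bp_dia']}",
        f"Temp {vitals['temp_c']:.1f} C",
    ])

# Example reading as it would appear in the surgeon's field of view:
print(format_hud({"heart_rate": 72, "bp_sys": 120, "bp_dia": 80, "temp_c": 36.6}))
# prints: HR 72 bpm | BP 120/80 | Temp 36.6 C
```

Keeping the overlay to a single short line matters here: the whole point is to add information without blocking the surgeon's view.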
Since we did not have enough time to build a prototype, we created a proof of concept for the app and presented it in class. One thing worth mentioning is the pros and cons analysis. While an application like this would reduce the need for extra display equipment in the operating room and help doctors maintain their focus, it would also have a few major drawbacks.
The first one is battery life. Wearable smart devices are still not reliable enough to last more than a few hours under moderate usage. The other drawback is data accuracy. How reliable would the data received from the sensors be, and more importantly, how fast would it be displayed on the device? Even a delay of a few milliseconds would carry significant risk with so much at stake.
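To make the latency concern concrete, one simple safeguard is a staleness check: each reading carries the timestamp at which it was captured, and the display flags any reading whose end-to-end delay exceeds a safety threshold instead of silently showing outdated values. The sketch below is hypothetical, and the 200 ms threshold is an assumption, not a clinical figure.

```python
# Hypothetical staleness check for sensor readings on the display side.
# The threshold is an assumed value for illustration only.

STALE_THRESHOLD_MS = 200  # assumed maximum acceptable display delay

def is_stale(captured_ms, displayed_ms, threshold_ms=STALE_THRESHOLD_MS):
    """Return True if the reading took too long to reach the display."""
    return (displayed_ms - captured_ms) > threshold_ms

# A reading captured at t=0 and shown at t=150 ms is still fresh;
# one shown only at t=350 ms should be flagged rather than trusted.
print(is_stale(0, 150))  # prints: False
print(is_stale(0, 350))  # prints: True
```

Flagging stale data does not remove the delay, but it at least prevents the surgeon from acting on numbers that no longer reflect the patient's state.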
Healthcare seems to be a really complicated field for new AR applications to prosper in. In my opinion, one should first focus on improving the hardware’s efficiency and performance, building strong foundations so that future applications do not have to worry as much about hardware limitations.