Healthcare AR

This week we continued exploring the possibilities of AR by trying to find a practical application for it within the healthcare industry. The intended audience would be either people who work in healthcare, such as doctors, nurses and medical students, or people receiving healthcare, for example hospitalized patients, people with disabilities and others.

Our group discussed possible use cases for an AR healthcare application. The problem was that most of our ideas had already been implemented in the past, with varying degrees of success. One of the use cases we liked was using AR to scan part of a person’s body (e.g. an arm) and highlight the veins, making it easier to take a blood sample. We found out, however, that the same result can already be achieved by illuminating the desired area with a specialized light source.

Despite our frustration, we kept trying to come up with another use case. We focused on the problems a modern-day surgeon might have to face. A typical operating room is packed with various machines and monitors that visualize the patient’s data. The doctors need to pay close attention to this data at all times, which can become overwhelming, especially since an average surgery lasts several hours.

An example of a Philips Hybrid OR

We started questioning how an operating room could be optimized and came up with an idea for an app that uses AR to display, in real time, the data received from sensors attached to the patient. This application could also benefit military doctors, who may not have access to the luxuries of an operating room when providing first aid on a battlefield.


The next step in our process was to find the appropriate hardware for this type of application. There are currently many different technologies used to render AR content, including handheld devices, eyeglasses, head-up displays (HUDs), optical projection systems and other devices worn on the body. We concluded that a wearable mobile device would fit best, and one of the most obvious choices was Google Glass.


So the general idea was to incorporate AR functionality into a wearable mobile device such as Google Glass. Sensor data would be displayed on a part of the device’s UI at all times, without limiting the user’s vision. This means a surgeon could monitor the patient’s heartbeat, temperature, blood pressure etc. without having to look around the room, and stay focused on the task at hand instead.

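To make the idea a bit more concrete, here is a rough sketch of what the overlay logic could look like. It is written as a Unity C# behaviour only for consistency with the Magic Book project later on this blog (an actual Glass app would be built with Google’s own tools), and the sensor feed, field names and staleness threshold are all hypothetical:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical vitals reading, pushed in by whatever component talks to the sensors.
public struct VitalSigns
{
    public float heartRateBpm;
    public float temperatureC;
    public string bloodPressure;   // e.g. "120/80"
    public float receivedAt;       // Time.realtimeSinceStartup when the reading arrived
}

public class VitalsOverlay : MonoBehaviour
{
    public Text heartRateLabel;
    public Text temperatureLabel;
    public Text bloodPressureLabel;
    public float staleAfterSeconds = 1f;   // flag readings that stop updating

    private VitalSigns latest;

    // Called by the (hypothetical) sensor receiver whenever a new reading arrives.
    public void OnVitalsReceived(VitalSigns vitals)
    {
        latest = vitals;
        heartRateLabel.text = "HR: " + vitals.heartRateBpm.ToString("0") + " bpm";
        temperatureLabel.text = "Temp: " + vitals.temperatureC.ToString("0.0") + " °C";
        bloodPressureLabel.text = "BP: " + vitals.bloodPressure;
    }

    void Update()
    {
        // Make stale data visually obvious, so nobody trusts numbers that stopped updating.
        bool stale = Time.realtimeSinceStartup - latest.receivedAt > staleAfterSeconds;
        Color color = stale ? Color.red : Color.white;
        heartRateLabel.color = color;
        temperatureLabel.color = color;
        bloodPressureLabel.color = color;
    }
}
```

The staleness check ties into the latency concern discussed below: if readings stop arriving, the overlay should make that impossible to miss.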

Since we did not have enough time to build a prototype, we created a proof of concept for the app and presented it in class. One thing worth mentioning is the pros and cons analysis. While an application like this would minimize the need for extra display equipment in the operating room and also help the doctors maintain their focus, it would have a few major drawbacks.

The first one is battery life: wearable smart devices still cannot be relied upon to last more than a few hours under moderate usage. The other drawback is data accuracy. How reliable would the data received from the sensors be and, more importantly, how quickly would it reach the display? Even a delay of milliseconds would pose a significant risk with so much at stake.

Healthcare seems to be a really complicated field for new AR applications to prosper in. In my opinion, the focus should first be on improving the hardware’s efficiency and performance, building strong foundations so that future applications do not have to worry about hardware limitations.

Magic Book

Our first group project for the Mixed Realities course was to create our own “Magic Book”. That meant we had to find a book and use augmented reality (AR) in order to enhance the reading experience in a meaningful way.

Well, first things first, so we went to the university’s library and started searching for an appropriate book. The basic idea was to find a book with big images, which we could then use as markers to scan with our AR application. We found the children’s section and started checking out a few of the books. We came across a Danish book with lots of images, whose purpose was to teach kids the words for various items.

The cover of the book we used for our app

We thought it would fit our assignment perfectly, as we could use AR to display 3D model representations of the items and let users interact with them, thus making the book truly interactive.

Each page of the book includes pictures of various items and their corresponding names in Danish. So we decided to use each page as a marker for our app and also create a small quiz in which the users would have to guess each item’s name in English.


Since we had only one week until the deadline, we only worked with the first 3 pages. We assigned one page to each member of our group, who then had to find 3D models online for a few of the items on that page. After that, we had to create a scene in Unity, import the 3D models and adjust their position and size to fit the page.
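I won’t go into implementation details here, but for context, marker-based AR in Unity typically comes down to a small handler per tracked image. The sketch below assumes a Vuforia image target (the SDK is not named anywhere in this post; Vuforia just happened to be the common companion to Unity at the time), with the page’s 3D models parented under the target so they only show up while that page is recognized:

```csharp
using UnityEngine;
using Vuforia;

// Attach to the ImageTarget representing one page of the book; all 3D models for that
// page are placed as its children and are only rendered while the page is tracked.
public class PageModelsHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);

        SetModelsVisible(false);   // hide everything until the page is found
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
        SetModelsVisible(found);
    }

    private void SetModelsVisible(bool visible)
    {
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
            r.enabled = visible;
    }
}
```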

As soon as we had everything we needed, we combined it all into one project and started working on the user interface (UI) and the quiz. Luckily, it didn’t take us long to finish this part, adding a question for each of the items displayed in the app. We also hand-drew the two main characters from the book and used them in the quiz interface.

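For anyone curious, the quiz logic itself needs very little code. The sketch below shows one way it can be wired up in Unity; the class and field names are made up for illustration and are not taken from our project:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

// One quiz entry per item shown in the app.
[Serializable]
public class QuizItem
{
    public string danishWord;    // the word printed on the page
    public string englishWord;   // the answer the player has to type
}

public class QuizController : MonoBehaviour
{
    public List<QuizItem> items = new List<QuizItem>();
    public Text questionLabel;       // shows "What is '...' in English?"
    public InputField answerField;   // where the player types the guess

    private int current;

    void Start()
    {
        ShowQuestion(0);
    }

    // A "Next" button in the UI can call this with the next index.
    public void ShowQuestion(int index)
    {
        current = index;
        questionLabel.text = "What is \"" + items[index].danishWord + "\" in English?";
        answerField.text = "";
    }

    // Hook this up to a "Check" button in the quiz UI.
    public void CheckAnswer()
    {
        bool correct = string.Equals(answerField.text.Trim(),
                                     items[current].englishWord,
                                     StringComparison.OrdinalIgnoreCase);
        questionLabel.text = correct ? "Correct!" : "Try again!";
    }
}
```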

The final prototype was presented in class and received positive feedback from our teachers and classmates.


The good part about this project is that a lot of new features could be implemented to improve the user experience significantly: sound effects, a voice-over for narration, and interactions with the 3D models such as spinning, recoloring or resizing them.
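None of these features exist in the prototype yet, but as a sketch of how the model interactions could look in Unity (the parameter values are arbitrary), a one-finger drag could spin the model and a two-finger pinch could resize it:

```csharp
using UnityEngine;

// Simple touch interactions for an AR model: drag to rotate, pinch to scale.
public class ModelInteraction : MonoBehaviour
{
    public float rotateSpeed = 0.4f;          // degrees per pixel dragged
    public float minScale = 0.5f, maxScale = 3f;

    private float prevPinchDistance;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Moved)
                transform.Rotate(Vector3.up, -t.deltaPosition.x * rotateSpeed, Space.World);
        }
        else if (Input.touchCount == 2)
        {
            float dist = Vector2.Distance(Input.GetTouch(0).position, Input.GetTouch(1).position);
            if (prevPinchDistance > 0f)
            {
                // Scale proportionally to how much the pinch distance changed, within limits.
                float factor = dist / prevPinchDistance;
                float s = Mathf.Clamp(transform.localScale.x * factor, minScale, maxScale);
                transform.localScale = Vector3.one * s;
            }
            prevPinchDistance = dist;
        }
        else
        {
            prevPinchDistance = 0f;
        }
    }
}
```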

All in all, it was a very positive experience that gave me extra motivation for the Mixed Realities course, as well as a few ideas for side projects. One of them is (spoiler alert) to create an AR version of a book called “All my friends are dead”, which is one of my favorites. It is an illustrated humor book that showcases the downside of being everything from a clown to a cassette tape to a zombie.

The cover of “All my friends are dead”

An introduction

“Reality leaves a lot to the imagination.” – John Lennon

Virtual Reality, Augmented Reality, Mixed Reality, AR, VR, MR… words and terms that we hear more and more lately, whether we are talking about new technologies or new forms of entertainment like movies, games etc. But let’s take a closer look at what they actually mean.

VR is short for Virtual Reality and is the umbrella term for all immersive experiences, which could be created using purely real-world content, purely synthetic content or a hybrid of both.

Various forms of visual media would never achieve what VR can, because they lack the most desirable of all elements required for entertainment – Interactivity.

Videos and photos – be they normal, panoramic, 360, 3D or 360 3D – are passive in nature. They can only be immersive up to a certain point. Beyond that, we need something that reacts and responds to the stimulus given to it.

Photos and videos are out, then. How about 360 games? Games are interactive, aren’t they? So, can 360 3D games become the ultimate form of entertainment?

Take a second and think. Which of the following do you think is the most involving:
A film about football, like Goal?
A computer game about football, like FIFA 17?
Or football itself – the kind you actually play on the field? Now you know.

Augmented reality or AR is an overlay of content on the real world, but that content is not anchored to or part of it. The real-world content and the virtual content are not able to respond to each other.

IKEA has developed a table as part of its concept kitchen that suggests recipes based on the ingredients placed on it, which is a great example of how AR could potentially work in the real world. Google Glass was Google’s first attempt to bring augmented reality to consumers, and we can expect to see more of this in the future.

MR or Mixed Reality is an overlay of synthetic content on the real world that is anchored to and interacts with the real world—picture surgeons overlaying virtual ultrasound images on their patient while performing an operation, for example. The key characteristic of MR is that the synthetic content and the real-world content are able to react to each other in real time.

Hardware associated with mixed reality includes Microsoft’s HoloLens, which is set to be big in MR, although Microsoft has dodged the AR/MR debate by introducing yet another term: “holographic computing”.

This blog, as mentioned on the About page, will act as a documentation tool for a Mixed Realities course, in which I will write down my thoughts and reflections on each course assignment. I will try to avoid too many technicalities about the implementation stages and instead emphasize the design, features and further opportunities of each project.