Jesse Schell on teaching mathematics in VR

A few days ago, I watched a keynote on “The Future of Virtual Reality and Education” by Jesse Schell at Games for Change 2016 on YouTube. Here is what Jesse Schell said about teaching mathematics in VR (at 11:20 in the video):

Mathematics, so abstract, can be very hard for people to grasp. What about now when you can literally grasp mathematics with your hands and manipulate numbers and formulas and graphs and equations and get an intuitive kinesthetic sense of the way that mathematics works? I often think about Einstein talking about how he figured out relativity. He said he often likes to imagine that he was flying around in space riding on a light beam, and it was by that imagining that he was able to put it all together. And virtual reality is certainly a tool for imagination.

In a sense, this is what LiveGraphics3D’s direct manipulation of parametrized graphics is all about. What is different in VR? Let’s make a list:

  • In VR, it’s easier to see, understand, and manipulate 3D objects. For 2D graphics, however, there might not be much of a difference in these respects.
  • In VR, all graphics (3D and 2D) have an actual size that is known when the graphics are created. For graphics on computer screens, this is not the case because the size of the end user’s screen is not known. In VR, one can therefore show models at a certain scale or even at their actual size.
  • VR provides more immersion and less distraction. This could make it easier to focus on the interaction with the graphics. Users might also be more curious and more motivated to explore the possibilities for interaction, since there is nothing else to do and the manipulation might feel more real (and therefore more “meaningful”) than on a computer screen. (This could be amplified by additional auditory and/or haptic feedback.)
  • In VR, it might be more natural or useful to have a meaningful background (e.g., a 360-degree image or video) to provide context and/or to show an environment that supports the experience (e.g., by improving memorization).

There are also some challenges in VR that need to be addressed:

  • The potentially large size of objects in VR means that in some cases users will probably want to point at the things they want to drag instead of grasping them, simply because some things are too far away to reach. (A minimal ray-casting sketch follows this list.)
  • An implication of interactions feeling more real or meaningful might be that users expect them to have more far-reaching effects, e.g., that manipulating a variable in one object should automatically update the same variable in all objects in the virtual environment (a shared-parameter sketch follows this list). Users might also expect more kinds of interactions than on screens, e.g., taking things apart, combining things, etc.
  • In VR, it’s more difficult to read text. Instead of text, it would be better if a prerecorded voice could walk users through the experience by providing explanations and ideas for ways to manipulate the graphics, commenting on how users manipulate the graphics, etc. Of course, in this case it is crucial how well the voice reacts to the users’ actions. On the other hand, static text isn’t particularly good at reacting to users either.
  • In VR, social interaction (specifically, human-to-human communication) often has to be mediated, i.e., the software often has to offer special features to support it. (HTML pages usually don’t need to do anything to support social interaction: several co-located users can read, point at, and discuss a page at the same time.) Support for social interaction in VR could consist of simple things like mirroring the VR user’s view on a computer screen for other users to see, and representing the mouse cursor within the VR display whenever it hovers over the mirrored view (a pointer-mirroring sketch follows this list). This alone would already support pointing for all users.
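
To make the pointing idea from the first challenge above a bit more concrete, here is a minimal sketch of ray-based selection and dragging, assuming a three.js scene and a VR controller whose pose is available as an Object3D. The function and variable names (onSelectStart, draggable, etc.) are illustrative and not taken from any particular framework.

```typescript
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const tempMatrix = new THREE.Matrix4();

let selected: THREE.Object3D | null = null;
let grabDistance = 0;

// Point the ray along the controller's forward (-z) direction.
function updateRayFromController(controller: THREE.Object3D): void {
  tempMatrix.identity().extractRotation(controller.matrixWorld);
  raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
  raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);
}

// On "select" (e.g., a trigger press), pick the closest draggable object hit by the ray.
function onSelectStart(controller: THREE.Object3D, draggable: THREE.Object3D[]): void {
  updateRayFromController(controller);
  const hits = raycaster.intersectObjects(draggable, false);
  if (hits.length > 0) {
    selected = hits[0].object;
    grabDistance = hits[0].distance; // keep the object at its original distance
  }
}

// While dragging, move the selected object along the current ray so that it
// follows the pointing direction without having to be within arm's reach.
// (Assumes the draggable objects are direct children of the scene.)
function onControllerMove(controller: THREE.Object3D): void {
  if (!selected) return;
  updateRayFromController(controller);
  selected.position
    .copy(raycaster.ray.origin)
    .addScaledVector(raycaster.ray.direction, grabDistance);
}

// Releasing the "select" button drops the object at its current position.
function onSelectEnd(): void {
  selected = null;
}
```

Keeping the grabbed object at its original distance along the ray is only one possible mapping; one could also reel it in towards the user or attenuate its movement.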
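
For the expectation that manipulating a variable in one object updates it everywhere, one possible approach is a shared parameter store that notifies every dependent graphic, i.e., a simple observer pattern. This is a hypothetical sketch; the class and method names are made up for illustration.

```typescript
type Listener = (name: string, value: number) => void;

class SharedParameters {
  private values = new Map<string, number>();
  private listeners: Listener[] = [];

  // Each graphic in the virtual environment registers a callback here.
  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Changing a parameter notifies every registered graphic,
  // not just the one that was manipulated.
  set(name: string, value: number): void {
    this.values.set(name, value);
    for (const listener of this.listeners) listener(name, value);
  }

  get(name: string): number | undefined {
    return this.values.get(name);
  }
}

// Usage: two graphics share the parameter "a"; dragging either one
// would call params.set("a", ...) and both re-render with the new value.
const params = new SharedParameters();
params.subscribe((name, value) => console.log(`graph 1 updates ${name} = ${value}`));
params.subscribe((name, value) => console.log(`graph 2 updates ${name} = ${value}`));
params.set('a', 2.5);
```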
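
Finally, a sketch of the pointer-mirroring idea from the last challenge: a mousemove listener on the mirrored canvas unprojects the cursor position into the 3D scene and places a small marker on the surface it hits, so the VR user can see what the desktop user is pointing at. This assumes the mirrored view is rendered with the same three.js camera; the marker geometry and all names are illustrative.

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.01, 100);
const renderer = new THREE.WebGLRenderer();
document.body.appendChild(renderer.domElement);

// Objects that the desktop user can point at (e.g., the mathematical graphics).
const pointerTargets: THREE.Object3D[] = [];

// A small sphere that marks what the desktop user is pointing at.
const marker = new THREE.Mesh(
  new THREE.SphereGeometry(0.02),
  new THREE.MeshBasicMaterial({ color: 0xff0000 })
);
marker.visible = false;
scene.add(marker);

const pointer = new THREE.Vector2();
const pointerRaycaster = new THREE.Raycaster();

renderer.domElement.addEventListener('mousemove', (event: MouseEvent) => {
  // Convert the mouse position over the mirrored view to normalized device coordinates.
  const rect = renderer.domElement.getBoundingClientRect();
  pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  // Cast a ray from the camera through the cursor and place the marker on the hit point.
  pointerRaycaster.setFromCamera(pointer, camera);
  const hits = pointerRaycaster.intersectObjects(pointerTargets, true);
  marker.visible = hits.length > 0;
  if (hits.length > 0) marker.position.copy(hits[0].point);
});
```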

Well, I guess there are a couple of interesting issues that one could research … if one finds the time. 🙂

Martin