Next Steps

So, I’ve done some debugging of the Parser class. Most bugs that I found were typos or cases where I missed a “this.” or a class name before a constant. Since I catch these problems only if the code is executed, I think it is not very efficient to try to debug everything at this point; it should be easier to check more examples once all of the code is ported to JavaScript.

It also became clearer what the next steps should be. For a long time I had assumed that I would port the rendering of graphics before the evaluation of symbolic expressions (i.e., in the order in which these features were implemented originally). However, I have implemented large parts of the Expression and Evaluator classes already in order to debug the Parser class; thus, it’s more natural to finish the implementation of these classes before starting with the rendering of graphics.

Thus, the next steps are probably:

  • evaluation of expressions (Expression and Evaluator classes)
  • preprocessing of graphics primitives (Graphics3D class)
  • rendering of graphics primitives (except formatted text) (Graphics3D class)
  • processing of graphics options (Graphics3D class)
  • rendering of formatted text (Graphics3D class and TextElement class)
  • user interaction (Live class)

I think most of this should be straightforward … maybe with the exception of formatted text. And mouse events are probably as weird as they are in Java applets. And it only gets worse if I want to support single-touch, multi-touch, and touch-pad events on mobile and desktop systems running Windows and macOS …

OK, I’d better stop worrying about future stuff now.

Martin

The Dummy-In-Chief

I had to implement another class for debugging: ExpressionElement. It’s just a container class without any methods. The Java class is:

class ExpressionElement
{
 public int token;
 public double value;  
 public int nesting;

 public ExpressionElement(int new_token, 
  double new_value, int new_nesting)
 {
  token = new_token;
  value = new_value;
  nesting = new_nesting;
 }
}

And my first version in JavaScript was (using a bit of JSDoc syntax):

/**
 * @class lg3dExpressionElement
 * @classdesc A container class for elements  
 * (i.e., nodes) of symbolic expressions. 
 *
 * @desc Create a new lg3dExpressionElement object.
 */
function lg3dExpressionElement(new_token, new_value, new_nesting)
{
 token = new_token;
 value = new_value;
 nesting = new_nesting; 
}

The problem is that in JavaScript, I have to write “this.token”, “this.value”, and “this.nesting” instead of “token”, “value”, and “nesting”. Interestingly, I wasn’t able to debug the code in Firefox and had difficulties with Chrome. The worst part was that for more than an hour, I suspected that the problem was the WLAN connection and/or a Firefox update.
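For the record, the working version simply adds the “this.” prefixes (JSDoc comment omitted):

function lg3dExpressionElement(new_token, new_value, new_nesting)
{
 // "this." is required to create fields on the new object
 this.token = new_token;
 this.value = new_value;
 this.nesting = new_nesting;
}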

On the positive side, I’ve probably learned an important lesson for the rest of my life (or at least a few weeks).

Martin

More Dummies

Today, I had to code another dummy implementation of a class – namely the “TextElement” class, which represents a node in a formatted text expression. LiveGraphics3D supports some text formatting, including basic mathematical typesetting, in particular subscripts and superscripts. I had to add this feature many years ago when I tried to use LiveGraphics3D to show interactive graphics based on the figures of a textbook that used somewhat complex labels – which shows that trying to solve actual problems can help a lot in understanding the real nature of these problems.
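(Just to illustrate what such a dummy looks like – the actual fields of the applet’s class are probably different, so take this only as a placeholder sketch:)

function lg3dTextElement(new_text, new_nesting)
{
 // placeholder fields for a dummy container class;
 // the real TextElement class has its own set of fields
 this.text = new_text;
 this.nesting = new_nesting;
}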

Or as I like to put it (with a view to other degree programs at my university): If you don’t create a working prototype of your solution, you probably don’t even understand the problem that you are trying to solve.

Anyway, I hope I’ll be able to start with the testing of the Parser class tomorrow.

Martin

Picking it up again

After a pause due to exams and holidays, I’m continuing with the porting of the Parser class. I’ve gone through its code now, but I still need to debug the JavaScript, and for that I need a few more dummy implementations of functions from other classes. And test cases for the debugging. Many test cases.

Debugging is an interesting activity: while it’s work, it’s often similar to solving a challenging puzzle that requires exploration of what’s actually going on (not unlike a criminal investigation). In that sense, debugging has some features of (explorative) play, which I find quite motivating. Writing test cases is different: one has to write test cases to reveal problems, which then have to be debugged. Thus, writing test cases is work that leads to even more work, which is a lot less motivating.

On the other hand, each test case is passed at some point, i.e., there is a happy ending to each and every test case. Think positive! 🙂

Martin

Half-time for Parser

I’m halfway through my first pass of the Parser class – only the parsing of options, variables, and expressions is missing. I’ll try to get the full parsing to work first because then I can easily test it with many examples before moving on to rendering graphics and evaluating expressions.

I saw quite a few things that I didn’t remember about the applet – for example, the support for arrays of constant numbers. As far as I remember, this was motivated by the problem of using variables as indices: these arrays make it possible to use variables as indices into arrays of constant numbers – even multidimensional arrays. However, I think I never tested this sufficiently, and therefore it is not documented.

Also, I forgot about “If”s around graphics primitives. Basically, all geometric primitives always exist, but depending on the values of the “If” conditions around them, primitives are activated or deactivated. I had forgotten how awkward this part of the code is.

Actually, the code of the Parser class feels quite awkward in general. It probably would have been much better to first build up a hierarchical data structure representing a general Mathematica expression during parsing, and only then to process that data structure based on the “Head”s of its nodes.
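For example (just a hypothetical sketch, not how the current code works), such a node could be little more than a “Head” plus a list of arguments:

// hypothetical node of a hierarchical expression tree
function lg3dExpressionNode(new_head, new_args)
{
 this.head = new_head; // e.g. "List", "Polygon", "Plus", ...
 this.args = new_args; // array of child nodes or atomic values
}

// e.g., Plus[x, 2] would become:
// new lg3dExpressionNode("Plus", ["x", 2]);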

Hindsight is a wonderful thing.

Martin

Whatever you do, be passionate about it

Science is not only a disciple of reason, but, also, one of romance and passion.

Stephen Hawking

While going through some links to viewers for Mathematica graphics (including MathGL3d), I learned that Jens-Peer Kuska died in 2009. We never met personally, but in the early 2000s, I read many of his postings in Mathematica forums and we exchanged a few e-mails. His passion for Mathematica, mathematical visualization, and real-time computer graphics had a profound influence on LiveGraphics3D. His death is a great loss.

Martin


Jesse Schell on teaching mathematics in VR

A few days ago, I watched a keynote on “The Future of Virtual Reality and Education” by Jesse Schell at Games for Change 2016 on YouTube. Here is what Jesse Schell said about teaching mathematics in VR (at 11:20 in the video):

Mathematics, so abstract, can be very hard for people to grasp. What about now when you can literally grasp mathematics with your hands and manipulate numbers and formulas and graphs and equations and get an intuitive kinesthetic sense of the way that mathematics works? I often think about Einstein talking about how he figured out relativity. He said he often likes to imagine that he was flying around in space riding on a light beam, and it was by that imagining that he was able to put it all together. And virtual reality is certainly a tool for imagination.

In a sense, this is what LiveGraphics3D’s direct manipulation of parametrized graphics is all about. What is different in VR? Let’s make a list:

  • In VR, it’s easier to see, understand, and manipulate 3D objects. For 2D graphics, however, there might not be much of a difference in these respects.
  • In VR, all graphics (3D and 2D) have an actual size that is known when the graphics are created. For graphics on computer screens, this is not the case because the size of the screen of the end-user is not known. In VR, one can therefore show models at a certain scale or even in their actual size.
  • VR provides more immersion and less distraction. This could make it easier to focus on the interaction with the graphics. Users might also be more curious and more motivated to explore the possibilities for interactions since there is nothing else to do and because the manipulation might feel more real (and therefore more “meaningful”) than on a computer screen. (This could be amplified by additional auditory and/or haptic feedback.)
  • In VR, it might be more natural or useful to have a meaningful background (e.g., a 360-degree image or video) to provide context and/or to show an environment that supports the experience (e.g., by improving memorization).

There are also some challenges in VR that need to be addressed:

  • The potentially large size of objects in VR means that in some cases users will probably want to point at the things they want to drag instead of grasping them (because some things are simply too far away to reach).
  • An implication of the feeling of more real or meaningful interaction might be that users expect that their interactions have more meaning, e.g., that any manipulation of a variable in one object should automatically update the same variable in all objects in the virtual environment. Also, users might expect more interactions than on screens, e.g., taking things apart, combining things, etc.
  • In VR, it’s more difficult to read text. Instead of text, it would be better if a prerecorded voice could walk users through the experience by providing explanations and ideas for ways to manipulate the graphics, commenting on the way users manipulate the graphics, etc. Of course, in this case it is crucial how well the voice reacts to the actions of the users. On the other hand: static text isn’t particularly good at reacting to users.
  • In VR, social interaction (specifically, human-to-human communication) often has to be mediated, i.e., the software often has to offer special features to support social interaction. (HTML pages usually don’t need to do anything to support social interaction: several co-located users can read, point at, and discuss a page at the same time.) Support for social interaction in VR could be simple things like mirroring the display of the VR user to a computer screen for other users to see, and also representing the mouse cursor on the computer screen within the VR display if the mouse cursor is over the mirrored VR display. This would already support pointing for all users.

Well, I guess there are a couple of interesting issues that one could research … if one finds the time. 🙂

Martin

Where we are and WebVR

So, I’m planning to port LiveGraphics3D to a web app using the 2D context of the HTML5 canvas element. In the long run, it should also support the WebGL context of the HTML5 canvas element, which would provide hardware-accelerated 3D graphics rendering but requires more programming.

Now, I had a look at the draft of the WebVR API. Fun fact: for the rendering it relies on a canvas element instead of a WebGL context. Which means that a canvas element with a 2D context might work fine with WebVR. Which then means that it might be relatively easy to make LiveGraphics3D 2.x work in VR – even before it supports WebGL. How crazy would that be?
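Just to sketch what this might look like (based on my reading of the current draft; the names, the canvas id, and the whole flow might well change before the API is final):

// rough sketch based on the current WebVR draft – everything here may change
var canvas = document.getElementById("liveGraphics3DCanvas"); // hypothetical id
navigator.getVRDisplays().then(function (displays) {
 if (displays.length === 0) return;
 var vrDisplay = displays[0];
 // the draft allows handing a canvas element to requestPresent()
 // (in practice this probably has to happen in response to a user action)
 vrDisplay.requestPresent([{ source: canvas }]).then(function () {
  function onVRFrame() {
   vrDisplay.requestAnimationFrame(onVRFrame);
   // ... render the graphics into the canvas here ...
   vrDisplay.submitFrame();
  }
  vrDisplay.requestAnimationFrame(onVRFrame);
 });
});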

Of course, if that works, I would like to have not only one Graphics3D object in the virtual environment but many! And one should be able to pick them up with one hand while manipulating them with the other! And throw them around and into each other! And do bottle flips! Errrr … well, maybe not bottle flips.

But maybe the final specification of WebVR will require a WebGL context instead of allowing a 2D context. We’ll see.

Martin