Flex Framework – PureMVC

I’m going to use this post to talk about my experimentation with PureMVC, which will happen this week. In the meantime, I wanted to post a link to a presentation I just watched. These two fellas in the presentation did a thorough review of the various frameworks available to Flex developers and found that the Adobe solution, Cairngorm, is workable, but that the superior framework title goes to… {drum roll} PureMVC! So I feel validated.

[Review of Flex Frameworks]

PV3D Version and Choice of Micro-architecture

The latest progress in my project involves two specific areas. The first is my choice of 3D rendering framework. As my “Paradigm” requires the ability to visualize data in *three* dimensions, and Flash only renders in *two* dimensions, I need some way to bridge that gap. I could write my own three-dimensional routines… yeah, right. Totally doable, totally not doing it for this project. Instead, I’ve decided to use a pre-existing ActionScript-based 3D rendering framework known as Papervision3D (PV3D). This software provides an API for creating and animating 3D objects in any Flash-based technology, including Flash, Flex, and AIR.

There are currently two major branches of PV3D: 1.x and 2.x (Great White). While the 1.x branch has been around longer and is more stable, I have chosen the Great White branch for my project for two reasons. Primarily, performance: thanks to a number of changes in the architecture of the framework, the newer version can display many more objects at any given time with a much smoother frame rate. Secondly, Great White has some new features I will find extremely useful; the two most important to me are the new shaders and the ability to place interactive objects such as buttons and text fields on the faces of 3D objects.

I have implemented a sample application using PV3D GW which displays a set of 3D axes as well as a number of shaded cubes that randomly change position. To test the feasibility of mixing Flex components (which I will use for user settings and some controls) with the 3D objects, the test application lets the user toggle rotation of the cubes with a Flex checkbox. This all works well, and next I will attempt to implement a very basic Zoomable User Interface test case.
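For the curious, the test looks roughly like the sketch below. This is a minimal sketch rather than my actual project code, assuming the PV3D 2.0 (Great White) class names; plain colored materials stand in for the shaded ones, and the rotate flag is what the Flex checkbox flips. (In Flex, a BasicView is a plain Sprite, so it gets wrapped in a UIComponent to sit alongside the MXML controls.)

    package {
        import flash.events.Event;
        import org.papervision3d.materials.ColorMaterial;
        import org.papervision3d.materials.utils.MaterialsList;
        import org.papervision3d.objects.primitives.Cube;
        import org.papervision3d.view.BasicView;

        // Minimal Great White test: colored cubes the host app can spin on and off.
        public class CubeTest extends BasicView {
            public var rotate:Boolean = true; // flipped by the Flex checkbox
            private var cubes:Array = [];

            public function CubeTest() {
                super(640, 480);
                for (var i:int = 0; i < 10; i++) {
                    var mats:MaterialsList = new MaterialsList(
                        { all: new ColorMaterial(Math.random() * 0xFFFFFF) });
                    var cube:Cube = new Cube(mats, 100, 100, 100);
                    cube.x = Math.random() * 800 - 400;
                    cube.y = Math.random() * 800 - 400;
                    cube.z = Math.random() * 800 - 400;
                    cubes.push(cube);
                    scene.addChild(cube);
                }
                startRendering(); // render on every frame tick
            }

            override protected function onRenderTick(event:Event = null):void {
                if (rotate) {
                    for each (var c:Cube in cubes) {
                        c.rotationY += 2;
                    }
                }
                super.onRenderTick(event); // BasicView renders the scene
            }
        }
    }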

The second area in which I have made progress is deciding on a micro-architecture to base “Paradigm” on. For a project that involves building an extensible environment like this one, it is important to choose an architecture with a highly decoupled Model View Controller (MVC) layout. This means that someone authoring new functionality for the application can write their code so that the routines responsible for data handling, visual rendering, and application control do not need to know much about each other. They simply pass requests back and forth, and each part of the application decides on its own how to handle them.

After reviewing several micro-architectures, including Adobe’s Cairngorm, the community project EasyMVC, and the community project PureMVC, I have settled on the last. PureMVC provides a very “strict” MVC design built around what is known as a Facade, which coordinates the various parts of the application. The Facade routes messages called Notifications between Commands (control logic), Mediators (visual renderers), and Proxies (data handlers). In this way, once my Paradigm environment is up and running, someone adding functionality can create “modules” built from these Commands, Mediators, and Proxies, drop them into the application folder, and have them communicate with the built-in functionality at will.
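To make that concrete, here is a minimal sketch of the three actor types and their wiring, using the standard PureMVC AS3 API. The class and notification names (DataProxy, LoadDataCommand, DATA_LOADED, and so on) are hypothetical placeholders rather than my actual design, and each class would live in its own file.

    import org.puremvc.as3.interfaces.IFacade;
    import org.puremvc.as3.interfaces.INotification;
    import org.puremvc.as3.patterns.command.SimpleCommand;
    import org.puremvc.as3.patterns.facade.Facade;
    import org.puremvc.as3.patterns.mediator.Mediator;
    import org.puremvc.as3.patterns.proxy.Proxy;

    // Proxy: owns the data, knows nothing about the view.
    public class DataProxy extends Proxy {
        public static const NAME:String = "DataProxy";
        public function DataProxy() { super(NAME, []); }
        public function load():void {
            // ...fetch the data, then announce it to whoever is interested
            sendNotification("DATA_LOADED", data);
        }
    }

    // Command: control logic, triggered by a notification.
    public class LoadDataCommand extends SimpleCommand {
        override public function execute(note:INotification):void {
            DataProxy(facade.retrieveProxy(DataProxy.NAME)).load();
        }
    }

    // Mediator: wraps a view component, reacts to notifications.
    public class VisualizerMediator extends Mediator {
        public static const NAME:String = "VisualizerMediator";
        public function VisualizerMediator(view:Object) { super(NAME, view); }
        override public function listNotificationInterests():Array {
            return ["DATA_LOADED"];
        }
        override public function handleNotification(note:INotification):void {
            // hand the loaded data (note.getBody()) to the 3D view
        }
    }

    // Wiring, done once at application startup:
    var facade:IFacade = Facade.getInstance();
    facade.registerProxy(new DataProxy());
    facade.registerMediator(new VisualizerMediator(view3D)); // view3D: hypothetical 3D view
    facade.registerCommand("LOAD_DATA", LoadDataCommand);
    facade.sendNotification("LOAD_DATA");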

While I have experience with Cairngorm (Adobe’s micro-architecture) as well as my company’s internal architecture, this will be my first time using PureMVC.

Progress?

The last month has been hectic, to say the least. I’ve recently started working at Cynergy Systems, Inc. and have been wrapped up in work as well as “officially” moving to NYS, among other things. Thanks to that I have gotten little material work done on my project. I have, however, been thinking about it and letting some ideas and plans percolate. Because of that, I believe I am at the point where I should begin creating some live mockups of the functionality and start building out my API in code.

I was originally going to build out my API using more traditional methods, like diagramming. Instead, I have decided to start building mockup classes to scaffold the API and flesh out my ideas. As I develop this scaffold I will document my thought process and decisions in code comments, Subversion commit messages, and this blog.

To better facilitate regular work, I will be meeting with my fellow capstone students once or twice a week.

Some notes on what I’ve been working on. I will indeed be using Adobe Flex in the AIR environment for my project. While I have been leaning this way for most of the project, I am now certain I have no other reasonable choice, given my skill set and the options in the field.

Within Flex I will be designing the application with a modular design using Flex Modules. The core functionality will be implemented in a few simple modules: administration/plugin management, visualization, and configuration. The visualization module will be able to access resources and settings from plugin modules and will serve as a controller for them.
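As a sketch of the mechanics (the plugin URL and handler names are hypothetical), the visualization module would pull a plugin module in through the standard Flex ModuleManager API:

    import mx.events.ModuleEvent;
    import mx.modules.IModuleInfo;
    import mx.modules.ModuleManager;

    private var pluginInfo:IModuleInfo;

    private function loadPlugin(url:String):void {
        pluginInfo = ModuleManager.getModule(url); // e.g. "plugins/ImagePlugin.swf"
        pluginInfo.addEventListener(ModuleEvent.READY, onPluginReady);
        pluginInfo.load(); // asynchronous; READY fires once the SWF is in memory
    }

    private function onPluginReady(event:ModuleEvent):void {
        // factory.create() instantiates the module's root class; casting it to a
        // shared plugin interface is how the core would talk to it
        var plugin:Object = pluginInfo.factory.create();
    }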

At this point I am still trying to balance ease of plugin development against the ability to create highly unique and powerful plugins that don’t rely too heavily on the application itself. I want to make development for this system accessible, but at the same time I don’t want every plugin to look the same. This may mean a two-tier system in which simpler plugins rely heavily on the system while more complex ones override it, along the lines of the sketch below.
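A rough sketch of what that two-tier contract might look like (all names here are placeholders, not a final API):

    // Tier 1: simple plugins implement only this; the engine supplies
    // default visualization behavior on their behalf.
    public interface IParadigmPlugin {
        function get pluginName():String;
        function loadData(source:String):void;
    }

    // Tier 2: complex plugins also implement this to override the
    // engine's default rendering with their own visualizer.
    public interface IParadigmVisualizerPlugin extends IParadigmPlugin {
        function createVisualizer():Object; // returns a custom 3D renderer
    }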


Week 3 – Not much

So it’s week 3 and I don’t think I’ll get much done this week. I’m subbing as a TA for a friend, among several other commitments. I’ll have to play catch-up over the Christmas break.

Week 2 – API Design

I have been doing a great deal of thinking about the architecture of my application as a whole, as well as how the plugin system will work. I have attached photos of some of my sketches to this post to illustrate my progress to date. I am having trouble deciding on a few key points and have had some changes of heart. First and foremost, for this initial version of the application I think I may require plugins to specify both data import and visualization parameters in the same plugin; it may be too difficult to implement a fully decoupled system in my timeframe. This means a plugin must define a data source, do the heavy lifting of importing it, describe what information is in the data source and how it’s related, and specify what visualization parameters to use (a rough sketch of that combined contract follows this paragraph). In my architecture sketch, this means the data plugins and the visualizer plugins are simply two parts of the same plugin. As can be seen in that sketch, the plugin provides information used throughout the application flow.
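Here is that sketch; the interface and method names are hypothetical and exist only to mirror the four responsibilities above:

    // One plugin owns both halves: data import and visualization.
    public interface IDataVizPlugin {
        function describeSource():String;  // where the data comes from
        function importData():Array;       // the heavy lifting: fetch and parse
        function describeFields():Object;  // what's in each item, how items relate
        function vizParameters():Object;   // how the engine should draw the items
    }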

In the application flow storyboard, we can see some of the concepts from last week’s post being applied in the form of overview, zooming, filtering, and details on demand.

I have begun to define some basic data types that will be included in the application and to which plugins will be able to map their imported data. For example, if a plugin imports a collection of images, it will be able to bind the image metadata to a built-in image data type for use in the engine. Each of these data types will have some generic information that is relatable to other data types. For example, most information chunks have some sort of title, subject, etc. Since it’s digital information, each item will also have a size (in bytes). The file type itself will be an important piece of generic information. Others are listed in one of the pictures.
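As a sketch (the class and field names here are placeholders), the built-in types might share a generic base that plugins populate from their imported metadata:

    // Generic base: the metadata every information chunk carries.
    public class InfoItem {
        public var title:String;
        public var subject:String;
        public var sizeBytes:uint;  // size in bytes of the underlying data
        public var fileType:String; // e.g. "jpg", "pdf"
    }

    // A built-in specialization; an image plugin binds its metadata here.
    public class ImageItem extends InfoItem {
        public var width:int;
        public var height:int;
    }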

I am still deciding whether this prototype will allow dynamic data loading or require all data to be loaded when the user chooses a data source. I may be able to let plugin authors decide this for themselves.


Week 1 – Readings

Over the past week I have been thoroughly reading through all of the materials I discovered during my literature search. As I find useful bits of information I write them down in a notebook. So far I have good notes from four of the eight books I have read.

The first book with some really useful info was Jef Raskin’s book on interface design (Raskin, 2000). A chapter on Zoomable User Interfaces (ZUIs) was the first to catch my eye. When envisioning my application I had thought to have distinct “layers” of content; if there were sub-layers, you would access them by activating an item in the current layer. Raskin’s chapter on ZUIs has led me to reexamine that concept. The idea of smoothly zooming and panning around a relatively static spatial area seems like a good way to help users find information repeatedly. A few sections of this and other books discuss how spatially organized interfaces become more easily navigable over time.

This book also discusses a number of important lessons Raskin has learned in his years as a UI designer. One that interested me was the idea of a modeless interface. Modal interfaces allow multiple applications to have different gestures for similar actions, where a gesture is a sequence of human actions that initiates a computer action. In a modeless interface all basic actions are categorized, and no matter the task, the user uses the same gesture to accomplish a given basic action. Some of these basic actions are:

  • Elementary Actions:
    • Indicated – pointed at
    • Selected – distinguished from other content
    • Activated – clicked on
    • Modified – used by being:
      • Generated – changed from empty to nonempty
      • Deleted – changed from nonempty to empty
      • Moved – inserted in one location and simultaneously deleted from another
      • Transformed – changed to another data type
      • Copied – sent to or received from an external device or duplicated at a different location locally

By assigning these various actions common gestures, the developer can ensure a more seamless experience for the end user. I plan to ensure that my API includes common conventions for these and other actions. The other side of the modeless issue is monotony: for an interface to be monotonous, every action must have exactly one gesture that accomplishes it. I am not certain I will go that far. It can be convenient to have both a mouse and a keyboard shortcut for a given action; for instance, zooming and panning might be accomplished by either keyboard or mouse, as in the sketch below.
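A small sketch of what I mean, in plain ActionScript (zoomIn and zoomOut are hypothetical application functions): the same two gestures always mean zoom, everywhere in the interface (modeless), but zoom deliberately has more than one gesture (not monotonous).

    import flash.events.KeyboardEvent;
    import flash.events.MouseEvent;
    import flash.ui.Keyboard;

    // Gesture 1: the mouse wheel zooms.
    stage.addEventListener(MouseEvent.MOUSE_WHEEL, function(e:MouseEvent):void {
        if (e.delta > 0) zoomIn(); else zoomOut();
    });

    // Gesture 2: the arrow keys zoom too.
    stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
        if (e.keyCode == Keyboard.UP) zoomIn();
        else if (e.keyCode == Keyboard.DOWN) zoomOut();
    });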

I was also pleased to note that Raskin discussed the use of Fitts’s Law and the Hick-Hyman Law. These two laws help in assessing how large interface elements should be and how many choices a user should be presented with at a given time for a given action. I plan to use the formulas from these laws as I design my own interface elements, as well as publish them in my API.
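For reference, these are the standard formulations (the constants a and b are fitted empirically per device and user, so they are placeholders here):

    // Fitts's Law: predicted time to hit a target of width w at distance d.
    function fittsTime(a:Number, b:Number, d:Number, w:Number):Number {
        return a + b * (Math.log(d / w + 1) / Math.LN2); // log base 2
    }

    // Hick-Hyman Law: predicted time to choose among n equally likely options.
    function hickHymanTime(a:Number, b:Number, n:int):Number {
        return a + b * (Math.log(n + 1) / Math.LN2);
    }

The intuition: doubling a target’s size shaves a fixed, b-weighted amount off the pointing time, and each doubling of the number of choices adds roughly a constant amount to decision time.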

The next book I read was Chen’s book on information visualization (Chen, 2006). The main reason I read this book was for its content on “extracting salient structures” from collections of information, which basically means understanding the content of the data and building relationships between the data items. This lines up with one of the modules for my application. I do not, however, plan to use that material at this point. After reading this and other works on relationship building, I now plan to use only the most basic metadata to build relationships between data items: filename, publication date, and size, among others. I will attempt to integrate a way for developers to extend this functionality using the plugin API. Perhaps I can associate file types with different parser plugins, which would extract information from a file and then produce some sort of “universal” compatibility coefficient, tags, keywords, etc.
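To pin the idea down, here is a naive sketch of such a coefficient built only from generic metadata; the weights are arbitrary placeholders, and InfoItem is the hypothetical base type sketched earlier.

    // Naive "compatibility coefficient" in [0, 1] from generic metadata only.
    function compatibility(a:InfoItem, b:InfoItem):Number {
        var score:Number = 0;
        if (a.fileType == b.fileType) score += 0.4;
        if (a.subject == b.subject) score += 0.4;
        // sizes within one order of magnitude of each other
        if (a.sizeBytes > 0 && b.sizeBytes > 0 &&
            Math.abs(Math.log(a.sizeBytes) - Math.log(b.sizeBytes)) < Math.LN10) {
            score += 0.2;
        }
        return score;
    }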

Another UI designer, Ben Shneiderman, wrote a book entitled Designing the User Interface about… interface design! This book contains a chapter on info visualization and an important visualization mantra: Overview first, zoom and filter, then details on demand (Shneiderman, 1998). This mantra will come in handy, especially in designing a ZUI. Applying it to my project goes something like this (a small sketch of how these stages might surface in code follows the list):

  • Overview first – I will present a zoomed out view showing all the possible data on screen at once. This will make details inaccessible at first, but will allow the user to see all possible general choices.
  • Zoom – The user will then be able to smoothly zoom and pan in on flocks of information items. This zooming might take the form of activating and entering a new area in a drill down action, or literally zooming until other flocks are out of view. This action may be definable and alterable by plugins.
  • Filter – Once the user is zoomed in, they will be able to apply a filter to what data items they are seeing. This filtering action will require user input and may vary depending on the types of data the user is viewing. This is an area where significant control will need to be handed to plugins.
  • Details on demand – After narrowing to the subset of data they wish to view, the user can invoke details on an item via some action such as clicking on it, or selecting a few items and pressing a details button or key. This too will likely be definable and extendable via plugin.
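One plausible way these stages could surface in code is as a set of named events that plugins listen for or redefine; the names here are hypothetical:

    // Hypothetical event names for the four stages of the mantra.
    public class ViewEvents {
        public static const OVERVIEW:String = "SHOW_OVERVIEW";
        public static const ZOOM:String     = "ZOOM_TO_FLOCK";
        public static const FILTER:String   = "APPLY_FILTER";
        public static const DETAILS:String  = "SHOW_DETAILS";
    }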

These ideas define how the person interacts with the visualization, but to get a better sense of how the visualization can look, and of what ways I can let plugin writers customize it, I read a book by Colin Ware titled… wait for it… wait for it… Information Visualization [well, the subtitle is “Perception for Design”] (Ware, 2004)! This book is a veritable treasure trove of visualization ideas. I had been debating with myself about the use of motion in my application, but this book cleared it up for me. In a chapter on how to make information pop out, Ware cites a study by Peterson and Dugas (1972) which established that a user has a wider useful field of view (UFOV) when motion is employed than when it isn’t. Specifically, if a user is paying close attention to something on screen, they are only likely to notice other items changing or dis/appearing within a 1 to 4 degree field of view. However, if those other items are moving (differently from the focused item), then the UFOV is closer to 20 degrees off the line of sight. This means I should definitely offer motion (e.g. jitter, floating, shaking, pulsing, etc.) as an option in my API.

Following this mention of motion, Ware discusses “Preattentive Processing”. This is essentially the concept that one can use certain visual styles to help a person unconsciously sort and process data before they actually start to pay attention to it. Preattentive processing is the mechanism by which items “pop out” of a large group. This section has a useful listing of features that can be preattentively processed:

  • Form
    • Line orientation, length, width, collinearity
    • Size
    • Curvature
    • Spatial grouping
    • Blur
    • Added marks
    • Numerosity
  • Color
    • Hue
    • Intensity
  • Motion
    • Flicker
    • Direction and velocity of motion
  • Spatial Position
    • 2D Position
    • Stereoscopic depth
    • Convex/concave shape from shading

Each of these visual styles aids the user in unconsciously distinguishing important data from bland data. Unfortunately, the more of these styles you use at once in a given collection, the less effective they become. There are, however, a few combinations of styles that Ware denotes as successful:

  • Spatial grouping on the xy plane in conjunction with color or shape
  • Stereoscopic depth in conjunction with color or movement
  • Convexity or concavity in conjunction with color
  • Motion with shape or color

If a designer uses only one of these effective combinations at a time, processing important items becomes easier for the user. The trick is in deciding which items are important!

Okay, that’s enough for one post (probably more than enough).

Works Cited

Chen, C. (2006). Information Visualization: Beyond the Horizon. Springer.

Raskin, J. (2000). The Humane Interface: New Directions for Designing Interactive Systems. Addison-Wesley Professional.

Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley.

Ware, C. (2004). Information Visualization, Second Edition: Perception for Design. Morgan Kaufmann.

 

Hello world!

Thus begins my Master’s Capstone Project. My name is Matthew. I am a graduate student at the Rochester Institute of Technology, about to embark on the last leg of my degree. This blog will chronicle my efforts and discoveries on the path to creating a new and hopefully useful system for data/info visualization. My intention is to create an application that others can use to import different kinds of information and then customize a three-dimensional visualization of that data to meet their needs. It will not be a complete visualization solution, but it will be a good start on the road toward a better one.