Defining meaning

The more pressing, if more complex, task of our digital age, then, lies not in figuring out what comes after the yottabyte, but in cultivating contact with an increasingly technologically formed world. In order to understand how our lives are already deeply formed by technology, we need to consider information not only in the abstract terms of terabytes and zettabytes, but also in more cultural terms. How do the technologies that humans form to engage the world come in turn to form us? What do these technologies that are of our own making and irreducible elements of our own being do to us? The analytical task lies in identifying and embracing forms of human agency particular to our digital age, without reducing technology to a mere mechanical extension of the human, to a mere tool. In short, asking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable.

The value of data lies in its meaning, not in the data itself. Chad Wellmon, in his essay “Why Google Isn’t Making Us Stupid…or Smart,” explains that we need to frame the data and craft the human experience around it. Only within such a crafted framework can we derive meaning.

The concept of crafting experience is at the heart of designing usable software.  All software deals in data, but it’s how you are able to use and understand the data that makes the software effective.  There are a variety of concepts around usability, from efficiencies to visualizations to pure functionality, but the cohesive framework problem lies at the core of each usability specialization.

One potential problem arising from the omnipresent issue of data overload is decision fatigue. John Tierney describes decision fatigue this way:

The more choices you make throughout the day, the harder each one becomes for your brain, and eventually it looks for shortcuts, usually in either of two very different ways. One shortcut is to become reckless: to act impulsively instead of expending the energy to first think through the consequences. (Sure, tweet that photo! What could go wrong?) The other shortcut is the ultimate energy saver: do nothing. Instead of agonizing over decisions, avoid any choice. Ducking a decision often creates bigger problems in the long run, but for the moment, it eases the mental strain.

Any software that simplifies or streamlines decision making around pertinent data is helpful; anything else is not. As Wellmon notes, this issue is not new: each technological advance has brought its own information overload. The story of Theuth and Thamus in Plato’s Phaedrus captures the essence of the problem, that we may mistake information for wisdom and make of people those who “will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”


Kinect for Windows SDK 1.5

Yesterday on the Kinect for Windows Blog, we learned that some new features will be released in an update to the SDK. In addition to increased language support and “seated mode,” we’ll be getting record, playback, and debug capabilities. This raises a few questions:

  • Will this functionality be better or worse than the KinectRecorder?
  • Will the SDK have discrete saved parts (image, depth, skeleton) for inspection?
  • Will the SDK have independent component playback capability, since one component of the sample code so far has skeletal joint rendering reliant on depth information?  Right now, in the KinectSkeletonViewer.DrawSkeleton method, a point object is returned by invoking the DepthImageFrame.MapFromSkeletonPoint method:
private Point GetPosition2DLocation(DepthImageFrame depthFrame, SkeletonPoint skeletonPoint)
{
    // Map the skeleton-space point into depth-image coordinates.
    DepthImagePoint depthPoint = depthFrame.MapFromSkeletonPoint(skeletonPoint);

    switch (ImageType)
    {
        case ImageType.Color:
            // Map into color-image coordinates, then scale to the render surface.
            ColorImagePoint colorPoint = depthFrame.MapToColorImagePoint(
                depthPoint.X, depthPoint.Y, this.Kinect.ColorStream.Format);
            return new Point(
                (int)(this.RenderSize.Width * colorPoint.X / this.Kinect.ColorStream.FrameWidth),
                (int)(this.RenderSize.Height * colorPoint.Y / this.Kinect.ColorStream.FrameHeight));

        case ImageType.Depth:
            // Scale the depth-image point directly to the render surface.
            return new Point(
                (int)(this.RenderSize.Width * depthPoint.X / depthFrame.Width),
                (int)(this.RenderSize.Height * depthPoint.Y / depthFrame.Height));

        default:
            throw new ArgumentOutOfRangeException("ImageType was an unexpected value: " + ImageType);
    }
}

In the end, I assume that each piece of the overall saved information (Image, Depth, Skeleton) will still be reliant on at least one of the other pieces.


New, New Things

I am glad the author of this article gives proper attribution to Michael Lewis.

It’s true that the next, next thing will incorporate multi-device, multi-user propositions, with a strong nod to natural user interfaces (NUI).  We are, in fact, due for the next thing at this point.  Information must be accessible anywhere, with fluid ease and a far more comprehensible presentation.  The layers of abstraction that information technology has piled onto the pursuit of efficiency are difficult to grasp.  Mastering the English language and physically translating it into the sub-syntax of computers is difficult enough in the one-on-one paradigm of human-computer interaction, let alone in the many-to-many interactions that will follow.

In the last post, we talked about the commoditization of information technology tools and techniques.  To a degree, this commoditization can help end users assimilate these abstract interfaces.  But we still live in a world where only about 33% of the population uses the internet (according to Nielsen Online, the ITU, and a variety of regulatory bodies).  To better enable the world’s population, this next, next thing needs to embrace this underserved portion of the population through lower cost and broader accessibility.  In the United States, we often take ease of access and the dynamics of cost for granted, but unfortunately, that is not the widespread global situation.


The Role of CIO and IT

This is a little off topic, but coming from an IT management background, I think it’s relevant:

IT Consumerization, the Cloud, and the Alleged Death of the CIO

I agree there will always be a place for the CIO, and in turn the IT department, even as tech gets consumerized.  One thing missing, or rather given short shrift, in the link above is an explanation of policy and security.  Sure, integrations will always carry with them increased flexibility in handling certain security protocols and policies.  And as time wears on with BYOD (Bring Your Own Device, for the uninitiated, and, by extension, bring your own apps), security policy will meld and simplify around a multi-device universe.  In short, security policy will itself become a bit more commoditized and consumerized.  But right now, it’s still the role of the CIO, and by extension the IT department, to implement, maintain, and enforce this policy, which I would rank as the largest area of risk of all.

Funny how, to this end, open source apps have finally pushed some pay models out and heralded not the era of open source, but the era of integrations and layered policy!

In an interesting segue, fabled security expert Bruce Schneier talks to TechCrunch about the idea of security and the relationship economy.  The consumerization of IT and related devices mirrors the phenomenon of scalable trust on the internet.  Trust is now a delegated resource, something we as consumers will likely see more of (Schneier’s example is eBay delegating trust for a slew of consumer merchants).  Trust, and by extension security, as a commodity can be outsourced only to the extent that organizations are willing to comply with certain standards of trust and security.  For instance, there are trust standards for organizations providing IT services to government-regulated agencies; these standards are usually comprehensive and well documented.  How much organizations can offload the security and trust model, and thus the risk, to outsourced entities remains to be seen, but it introduces even more opportunity for IT in the enterprise for the foreseeable future.


Skeleton Serialization

The serialization of the skeleton was a new development with the release of the official Kinect for Windows SDK v1.  While the format is not ideal (text as opposed to XML, due to inherent limitations in the skeleton-joint object model), neither is the size, which is something we’re working on.

If you are producing up to 30 frames per second, you are going to create up to 30 binary serialized objects, or recordsets, of skeleton information each second.  Saving a frame out to text right now (as a flat file) produced a very predictable 2.2 KB file.  If you are recording this activity over a significant period of time…well, I will let you do the math.  Thus, the baseline serialization is not optimal for this particular application.

Without losing any of the good stuff included in this serialized object for playback and analysis, there are a host of other options to choose from.  The current data set works out to roughly 66 KB per second, far more than an old 64 kbit/s ISDN line could carry.  And, remember, this is NOT including the heavier streams, like color and depth images encoded to video.  So, this is certainly something to consider when capturing this information.
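To put those numbers in perspective, here is a back-of-envelope sketch of the capture volume (in Python for brevity; the 2.2 KB-per-frame and 30 fps figures come from the measurements above, and the helper names are my own):

```python
# Back-of-envelope math for skeleton capture volume.
# Figures from the post: ~2.2 KB per serialized frame, up to 30 fps.

def capture_rate_kb_per_sec(frame_kb: float = 2.2, fps: int = 30) -> float:
    """Sustained write rate in KB/s for serialized skeleton frames."""
    return frame_kb * fps

def session_size_mb(minutes: float, frame_kb: float = 2.2, fps: int = 30) -> float:
    """Total file size in MB for a recording session of the given length."""
    return capture_rate_kb_per_sec(frame_kb, fps) * 60 * minutes / 1024

rate = capture_rate_kb_per_sec()  # ~66 KB/s of skeleton data alone
hour = session_size_mb(60)        # ~232 MB for a single hour of capture
```

An hour of nothing but skeleton frames already lands in the hundreds of megabytes, which is why the flat-text serialization is worth revisiting.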


Fitness as a game…

…makes sense, since most training activities are centered around a game of one sort or another.  Engadget reported today that Nike is extending the Nike+ brand to gamify training, in this case basketball.  Several tangible components are measured during training, but some abstract concepts (hustle!), as well as “for-fun” activities (dunking!!), are shown as well.  These applications are tied to the sale of shoes, which is a great cross-promotional endeavor (think: Microsoft selling PCs to extend sales of Windows, Office, etc.).

I think that gamification has a strong future in enticing adoption.  There are pundits out there who are not as enamored with the concept.  But the contrary opinion doesn’t address the fact that gamification can increase adoption by attaching palpable, immediate achievement to the task at hand.  People seem to thrive on achievement, and even contemporary workplace project management ideologies champion the approach of manageable slices of work with clear goals and a consistent, measurable definition of completion.  Why, then, can’t everyday activities be like that?

Measurable results certainly help in achieving a goal, since they drive the user to push toward predefined criteria.  While it is true that a fair degree of buzz and marketing traction has made the concept of gamification in any endeavor more widespread and noteworthy, the concept is effective.  Tom Robbins wrote, in Still Life with Woodpecker:

We are our own dragons as well as our own heroes, and we have to rescue ourselves from ourselves.

Isn’t gamification just one way to do just that?


Baseball Biomechanics and Kinematics

As Jonah Keri reported yesterday in Grantland, the Houston Astros made an interesting hire or two in the recent past.  One of these hires, Sig Mejdal, is the most compelling to me:

“All the pieces of information that you can imagine that we evaluate on an everyday basis to make decisions, we’re going to do that in a systematic way,” said new Astros general manager Jeff Luhnow, who brought Mejdal with him from the St. Louis Cardinals, where Luhnow ran the scouting department and Mejdal served as director of amateur draft analytics. “Utilizing information technology capabilities and pretty sophisticated tools and techniques.”

One such piece of the puzzle, as Keri reported, is the use of biomechanical information, something of a hobby/obsession of Mejdal’s.  Using this data to correlate probabilities of injury and performance extends the idea we are pushing in analytical performance analysis.  We can help fill this space by (1) automatically tracking tendency against the correlation and causation of injury and (2) turning those tendency analytics into recommendations for preventative measures.


Out in the Wild

We took the application out into the wild for the first time. Previous beta tests had been conducted in closed quarters. It was time, with the porting of the codebase over to the new commercial Kinect for Windows SDK, to get out and about. What did we learn today with beta in the field? A bunch of things:

  1. Those minimum hardware requirements are no joke. Testing on a highly portable, albeit somewhat hardware-deficient netbook (the Asus Eee PC 1201N) was a challenge. Skeletal tracking was the big hog, lagging well behind both the color and depth streams.
  2. Sensor placement is a big deal. Sure, in closed quarters, you can easily predict where things can and should be placed for optimum tracking, but out in the wild you are constantly confronted with the challenge of making the sensor placement work. Is electricity available? Is there a good camera position? Not only that, but while the skeletal tracking is good, sometimes it gets confused. A higher sensor placement may serve the tests better, and we will try it in the future (this time we were ~18″ off the ground).
  3. The sensor is light. This will be a consideration out in the field. Can it get damaged? Will it withstand any bumps or bruises?
  4. Encoded files are BIG. Right now, we’re working with video, and video files are large. It’s a common problem, but one that is now at the forefront of our challenges.
  5. The SDK works with the Xbox sensors.  While the license doesn’t explicitly state that the Xbox sensors won’t work, if you install the SDK on the target client and run your application using an Xbox 360 sensor, the application works.  While “…(I) agree that end users of Kinect for Windows Applications are not licensed to use Kinect for Xbox 360 sensors in connection with such Kinect for Windows Applications, and that you and your distributors will not directly or indirectly assist, encourage or enable Kinect for Windows Application end users to do so” (per the Restricted Use with the Kinect for Xbox 360 sensor clause of the commercial license), it was interesting to test this functionality out.
  6. People are interested. Curious onlookers were buzzing about what we were doing, which is a great thing.
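On point 4 above, a rough sizing sketch shows why encoded files loom so large (Python for brevity; treating the color stream as 640x480 RGB at 30 fps with 3 bytes per pixel, and the 50:1 compression ratio is purely an illustrative assumption, not a measured figure):

```python
# Rough sizing of a Kinect-style color video capture.
# 640x480 at 30 fps, 3 bytes per RGB pixel; the 50:1 compression
# ratio below is an assumed, illustrative figure only.

def raw_stream_mb_per_sec(width: int = 640, height: int = 480,
                          bytes_per_pixel: int = 3, fps: int = 30) -> float:
    """Uncompressed color-stream bandwidth in MB/s."""
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

def encoded_session_mb(minutes: float, ratio: float = 50.0) -> float:
    """Approximate encoded file size in MB for a session, given a codec ratio."""
    return raw_stream_mb_per_sec() * 60 * minutes / ratio

raw = raw_stream_mb_per_sec()   # ~26.4 MB/s before any encoding
demo = encoded_session_mb(10)   # ~316 MB for a ten-minute encoded session
```

Even with a generous codec, a short field session of color video runs to hundreds of megabytes, which matches what we saw in testing.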

More public rounds to come, then some online demos thereafter. Good times!
