Sunday, January 31, 2016

a future generation of computer peripherals (input)

How are people going to interact with an artificially generated 3d space? That is of course up to the developers of individual pieces of software and hardware to determine, though the models that let us traverse more information in shorter amounts of time will no doubt end up popularly employed.

Whether they're navigating files in their operating system or using software, users are going to long for a way to avoid taking off their headsets when it's time to type letters on a keyboard or navigate options with a mouse. A mouse requires the presentation of relevant options to reflect the 2d plane of a mouse pad, so what is the point of a 3d environment in that case? It's going to take a revolution in peripheral hardware before someone using solely an artificial 3d environment can access information as fast as someone using an old-school desktop PC setup.

But that's really just a natural assumption...

Our culture already knows how to get places fast in a 3d space: by pressing a gas pedal. The organization of information, from a programming point of view, is going to remain the same; it's the way we present information in our software that is going to change. We're going to be able to "drive" to it.
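To make the "driving" idea concrete, here is a minimal sketch of how an analog pedal could steer travel through an information space. Every name, scale, and parameter here is a hypothetical illustration, not a description of any real VR system.

```python
# Hypothetical sketch: mapping analog "gas pedal" pressure to forward travel
# through a 3d information space. Names and scales are illustrative only.

def travel_step(position, gaze_direction, pedal_pressure,
                dt=1.0 / 60.0, max_speed=50.0):
    """Advance the user's position along their gaze by pedal pressure.

    position, gaze_direction: 3-tuples (x, y, z); gaze_direction is a unit vector.
    pedal_pressure: 0.0 (released) to 1.0 (floored).
    """
    speed = max_speed * pedal_pressure  # deeper press = faster traversal
    return tuple(p + d * speed * dt for p, d in zip(position, gaze_direction))

# Example: gazing straight ahead along +z with the pedal half-pressed.
pos = travel_step((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5)
```

The point of tying speed to pedal pressure is that the user keeps one continuous, familiar control for "how fast do I want to get there," while gaze decides the destination.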

Even the navigation of information in our current digital age, the effervescent *click* of today's model, is positive input.

Imagine looking up at the sky: constellations of stars are transparently indicated as your everyday software. Through a seamless, sudden burst of forward momentum, the user takes a closer look at a spectacular, prominent nebula directly above. "My Computer" has many small stars and constellations to inspect further. All the while, a mere glance down (and maybe some gentle application of the gas pedal in that downward direction) displays additional options and system information connected to the constellation the user is at.

^ This describes what an early operating system is likely to look like. It leaves options open in terms of inputting information, because this will be left to the preference of the user in an early VR age, where not everyone has the same competency or comfort with a keyboard.

Early VR software will likely offer "in-house" options for data input, such as type pads or other promptings for positive input. Headaches caused by these early options will create demand for better interface hardware that makes use of the dexterity of a person's fingers. Any sense of intuitive operation of this new hardware will depend on the designer and the user sharing similar cultural and generational backgrounds and similar rational tendencies, which can all be very different: what is intuitive for a designer is often not intuitive for intelligent people; see, for example, the elderly.

Ultimately (yes, I mean ultimately), efficient navigation of artificial 3d planes will be enabled by a user and a designer both being familiar with known, predictable principles, like looking down towards the black part of the background prompting file properties to be displayed (just an example).
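A predictable gaze convention like the one just described could be sketched as a simple mapping from gaze pitch to a UI prompt. The threshold angle and action names here are made-up assumptions for illustration, not part of any existing interface.

```python
# Hypothetical sketch: a predictable gaze convention where looking down past
# a pitch threshold prompts the selected item's properties to be displayed.
# The threshold value and action strings are illustrative assumptions.

DOWNWARD_THRESHOLD_DEG = -30.0  # pitch at or below this counts as "looking down"

def gaze_action(pitch_deg, selected_item):
    """Map gaze pitch (degrees; negative = looking down) to a UI prompt."""
    if pitch_deg <= DOWNWARD_THRESHOLD_DEG:
        return "show_properties:" + selected_item
    return "none"
```

Because the rule is a single fixed threshold, a user who has learned it once can predict the interface's behavior anywhere in the space, which is exactly the kind of arbitrary-but-consistent convention the paragraph above argues for.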

Finally, when a user of a 3d interface can navigate obscure spaces, whatever they may resemble, based on known, predictable, potentially arbitrary formats of categorical organization, mankind will access information like never before.
