
What Will the Human-Computer Interface of the Future Look Like?

Computers are far more powerful today than they were 50 years ago, yet human interaction with them is still largely limited to the keyboard, mouse, and touchscreen. People use these three tools to control nearly all computer applications, with the touchscreen being the most recent to become a mainstream technique for computer navigation.

Early touchscreens registered only a single point of contact, but today's multitouch screens let a user zoom a photo with two fingers simultaneously. Similarly, a few predefined gestures can execute specific commands on these devices. Scientists and engineers, however, are already developing the new interface mechanisms discussed below.
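The core of a two-finger pinch-zoom gesture is simple geometry: the zoom factor is the ratio of the current distance between the two fingers to their distance when the gesture began. A minimal sketch (the function and point format are illustrative, not any particular platform's touch API):

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom_factor(start_touches, current_touches):
    """Zoom factor implied by a two-finger pinch gesture.

    Each argument is a pair of (x, y) points for the two fingers,
    at gesture start and now. A factor > 1 means zoom in, < 1 zoom out.
    """
    d0 = distance(*start_touches)
    d1 = distance(*current_touches)
    if d0 == 0:
        return 1.0  # degenerate gesture; leave the image unchanged
    return d1 / d0

# Fingers move apart: separation grows from 100 px to 200 px -> 2x zoom.
factor = pinch_zoom_factor([(0, 0), (100, 0)], [(0, 0), (200, 0)])
```

Real touch frameworks deliver the finger positions as events; the arithmetic, however, is essentially this ratio.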

Researchers at the University of Tokyo have developed the Khronos Projector, which combines a touch interface with new methods of navigating prerecorded video. The system consists of a projector and a camera mounted behind a flexible screen: the projector displays images on the screen while the camera detects changes in the screen's tension.

A user can push on the screen to affect the prerecorded video, for example speeding one section of the image up or slowing it down while leaving the rest unchanged. In effect, the user interacts with the screen itself to manipulate the video. Microsoft's Kinect peripheral for the Xbox 360 similarly lets you play video games without a physical controller.
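The idea of pushing harder on one region to shift its playback time can be sketched as follows. This is a toy illustration of the principle, not the Khronos Projector's actual code; the depth and offset constants are invented:

```python
# Each screen region's playback time is offset in proportion to how far
# the user pushes that region in; untouched regions stay at global time.

MAX_DEPTH = 1.0    # assumed maximum screen deformation (normalized)
MAX_OFFSET = 5.0   # assumed maximum playback-time offset, in seconds

def local_playback_time(global_time, depth):
    """Time to sample for a region pushed in by `depth` (0..MAX_DEPTH)."""
    depth = max(0.0, min(depth, MAX_DEPTH))
    # Deeper pressure pulls that region further back in time.
    return global_time - (depth / MAX_DEPTH) * MAX_OFFSET

# A region pushed halfway in shows footage 2.5 s earlier than the rest.
t = local_playback_time(10.0, 0.5)   # -> 7.5
```

Evaluating this per pixel (or per small tile) using the camera's deformation map yields the effect described above: different parts of the same frame showing different moments in time.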

Today, engineers are working on ways to control a computer with sound. They are designing techniques by which a computer can distinguish between ordinary words and commands, such as voice recognition systems that translate spoken messages into text on a smartphone with high accuracy. Others are developing hands-free interface mechanisms.
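Once speech has been transcribed to text, one simple way to distinguish commands from dictation is to match words against a known command vocabulary. The command set and sentence below are invented for illustration; a production system would work on the audio and use far richer language models:

```python
# Minimal sketch: separate recognized command words from plain dictation.
COMMANDS = {"open", "close", "save", "delete"}

def split_commands(transcript):
    """Return (command_words, dictated_words) from a transcript string."""
    commands, dictation = [], []
    for word in transcript.lower().split():
        if word in COMMANDS:
            commands.append(word)
        else:
            dictation.append(word)
    return commands, dictation

cmds, text = split_commands("please open the report and save it")
# cmds -> ['open', 'save']
```

The hard part in practice is not this lookup but deciding, from context and intonation, whether "open" was meant as a command at all.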

Oblong Industries has created a new interface called g-speak. The interface uses a collection of sensors and cameras to interpret a user's movements and translate them into computer commands.
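Translating a tracked movement into a command can be illustrated with a toy classifier that looks at the dominant direction of a hand's displacement between two sampled positions. This is not Oblong's actual g-speak code; the command names and coordinates are assumptions:

```python
def gesture_to_command(start, end):
    """Map a hand movement from `start` to `end` (x, y) to a command name."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    # Whichever axis saw the larger displacement wins.
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"

command = gesture_to_command((0.2, 0.5), (0.8, 0.6))  # -> "swipe_right"
```

Real systems like g-speak track full 3-D hand and finger poses over time, but the pipeline is the same shape: sense positions, classify the motion, emit a command.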

The next interface may use the brain itself, which communicates through small electrical signals. The nerve cells in the brain, called neurons, generate these signals.

If we can find a way to map these electrical signals, we can create a device that detects, interprets, and translates them to control external linked devices. Such an interface is called a brain-computer interface.
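The detect-interpret-translate pipeline can be sketched in its simplest possible form: threshold the amplitude of a (synthetic) brain-signal window to decide whether to issue a command. Real brain-computer interfaces use far more elaborate signal processing and machine learning; the threshold and sample values here are invented:

```python
THRESHOLD = 0.8  # assumed amplitude above which we treat the signal as intent

def translate_signal(samples):
    """Emit 'activate' when mean absolute amplitude exceeds THRESHOLD."""
    mean_amp = sum(abs(s) for s in samples) / len(samples)  # detect
    return "activate" if mean_amp > THRESHOLD else "idle"   # interpret + translate

print(translate_signal([0.9, 1.1, -1.0, 0.95]))  # strong signal
print(translate_signal([0.1, -0.05, 0.2, 0.0]))  # background noise
```

The output string would then drive the external linked device, closing the loop from neuron to machine.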

Doresh Chandra
