The Advent of Voice Recognition Systems

Decades ago, the graphical user interface (GUI) was developed to ease communication between humans and computers. It enabled people to interact with a computer by typing on a keyboard, moving a mouse pointer across the screen, and clicking and dragging items. Pioneered at Xerox PARC and later popularized by Apple and Microsoft, the GUI revolutionized computing and remains dominant to this day.

At first, humans had to learn how machines worked. Now the scenario is reversed: humans teach machines about the subjects they encounter in daily life. Once taught, machines are used to find the best possible solutions to everyday problems. No matter how complex a problem is, if the machine knows what a human knows, it becomes a quick option to turn to.

Interacting with a computer through a keyboard and mouse is now considered a classical method. With the advent of voice recognition systems, humans can interact with machines simply by talking to them. The machine listens to the words, understands them, and then uses those spoken commands to perform different functions.

Google has launched Google Now, Microsoft has launched Cortana, and Apple has launched Siri, all competing in the same market with the very same technology. Tech giants believe the future belongs to this technology: once someone has used it, going back to conventional interaction methods feels cumbersome by comparison.

With a voice recognition system, you can give your machine operating commands by speaking. You no longer need to worry about the keyboard and mouse, because the machine can type whatever you dictate to it.
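The commands-by-speaking idea can be sketched in a few lines, assuming a speech-to-text engine has already turned the audio into a transcript. The command names and handler functions below are hypothetical examples for illustration, not any vendor's actual API:

```python
# Minimal sketch: dispatch a recognized transcript to an action.
# Assumes the speech-to-text step has already happened; the
# commands and handlers here are made-up examples.

def open_browser():
    return "browser opened"

def type_text(words):
    return f"typed: {words}"

# spoken prefix -> handler (takes the rest of the transcript)
COMMANDS = {
    "open browser": lambda rest: open_browser(),
    "type": type_text,
}

def dispatch(transcript):
    """Match the transcript against known command prefixes."""
    text = transcript.lower().strip()
    for prefix, handler in COMMANDS.items():
        if text.startswith(prefix):
            rest = text[len(prefix):].strip()
            return handler(rest)
    return "command not recognized"

print(dispatch("Open browser"))
print(dispatch("Type hello world"))
```

A real assistant does much fuzzier matching (intent classification rather than prefix lookup), but the overall shape, transcript in, action out, is the same.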

Voice recognition systems have also enabled some physically challenged people to use machines like anyone else.

A new way to load webpages 34 percent faster in any browser

MIT, renowned for its computer science research, has come up with another spectacular way to improve the user experience: a system that makes webpages load up to 34 percent faster in any browser. Companies like Amazon reckon that every extra second a page takes to load shaves roughly one percentage point off their yearly profits.

Polaris, the new system designed by MIT researchers, cuts load times by overlapping the downloads of different parts of a webpage. Normally, a browser fetches pictures, scripts, and videos as the page's HTML references them, then evaluates each object and places it according to the scripts. Some objects depend on other objects, so a dependent object cannot be loaded until its dependencies have been read. Polaris tracks the relationships between all the objects and builds a dependency graph the browser can use to schedule downloads more effectively. In effect, it gives the browser a roadmap: the shortest, quickest way to load the page. Ravi Netravali, a Ph.D. student on the project, noted that a single fetch across the network can take up to 100 milliseconds, and as pages grow more complex these round trips multiply, inflating load times. After extensive testing, the researchers found that Polaris loaded pages up to 34 percent faster than browsers without it.
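The scheduling principle behind a dependency graph can be illustrated with a short sketch. The page objects and dependencies below are made up for illustration; the real Polaris tracks much finer-grained data dependencies, but the core idea, fetch an object as soon as everything it depends on is ready, is the topological ordering shown here:

```python
# Toy dependency-graph scheduler for page objects.
# Object names and dependencies are invented for illustration.
from collections import deque

# page object -> objects it depends on
deps = {
    "index.html": [],
    "style.css": ["index.html"],
    "app.js": ["index.html"],
    "data.json": ["app.js"],
    "hero.jpg": ["style.css"],
}

def load_order(deps):
    """Topological sort: fetch an object only after its dependencies."""
    # count unmet dependencies for each object
    pending = {obj: len(d) for obj, d in deps.items()}
    # reverse edges: which objects become fetchable when this one finishes
    unblocks = {obj: [] for obj in deps}
    for obj, d in deps.items():
        for dep in d:
            unblocks[dep].append(obj)
    ready = deque(obj for obj, n in pending.items() if n == 0)
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)
        for nxt in unblocks[obj]:
            pending[nxt] -= 1
            if pending[nxt] == 0:
                ready.append(nxt)
    return order

print(load_order(deps))
```

Everything sitting in the `ready` queue at the same time can be downloaded in parallel, which is where the overlap, and the speedup, comes from.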

It works in any browser and can make pages on sites such as Amazon, Walmart, Netflix, and Google load faster, fetching data in the fewest possible trips along a path tailored and optimized for each specific page.

Facebook helps Competitors build Hyperscale Data Centers together

Facebook processes more than 6 billion photos on a daily basis, and the revolution it brought means its competitors need the same computing power to process their own data. Technology comes at a cost, and not every company has pockets deep enough to develop sophisticated data centers for cloud storage. Facebook has therefore given its competitors a way to stay competitive: shared hyperscale data centers that Facebook and its rivals develop together.

The Open Compute Project (OCP) helps achieve that, building on the trend toward open hardware innovation. Big, wealthy companies such as Facebook can buy land, build data centers, and fill them with high-end computing and networking equipment; sharing the designs brings greater efficiency, faster computing, and the ability to perform large tasks in minutes for everyone. Google has largely supported Facebook in this drive. Although the two fight over online advertisers and their cash, in the end they sit at the same table, needing fast data processing at reasonable cost. The rivalry is just as intense in the financial services sector, where hyperscale projects are common: Fidelity, Bank of America, Goldman Sachs, and other international banks fight for customers at the front end, where the key revenue is generated, but at the back end they share the same OCP systems to stay efficient and cost-competitive.

Another area where OCP projects are common is the telecom industry. Recently, T-Mobile, Verizon, and AT&T joined forces on common data centers. Many telecom giants believe it would be impossible to build such infrastructure without the help of their competitors; by collaborating around shared designs, they make the approach feasible in costs and investment over the long run. Mahmoud El-Assir, Senior Vice President at Verizon, said that software, hardware, and networking were traditionally separate tasks but are now collectively one: change the software and you change the hardware too. The costs involved therefore cannot be borne by individual effort and require the whole industry to move together, with OCP bringing cost efficiency while each company remains legally responsible for its own automated systems. Facebook is installing network links that transfer data at up to 100 gigabits per second and intends to push that to 400 gigabits. That is a lot of data to move at any one time.

Augmented Reality

Augmented reality is the integration of the real-world environment with the digital world in real time, where elements of the real world are augmented using sensory inputs such as video, graphics, and GPS data. Augmented reality bridges the gap between the real and the virtual by enhancing the experience of what we see, hear, smell, and feel.

Augmented reality is going to change the way we see our world, and Google Glass is one example of how it will work. In 2009, during a TED Talk in India, Pranav Mistry and Pattie Maes presented the remarkable augmented-reality work being carried out at the MIT Media Lab. Their SixthSense device, which combines a camera, a small projector, and color markers, can instantly turn any surface, whether a wall or your palm, into an interactive screen. The camera captures the physical world and feeds the data to a smartphone, which uses GPS coordinates to interpret the objects in view and then displays the relevant information on any surface through the small projector. With such a breakthrough technology, imagination is the only limit.

Although this technology is still in development, you can experience a small version of augmented reality through various smartphone apps. Apps that translate a menu from one language to another simply by pointing the camera at it are a classic example. The technology is also used extensively in video games and the military. Many games on the market can mimic your movements: in a bowling game, for instance, the ball is thrown simply by making a throwing gesture in front of the camera. In the military, the technique is used in sophisticated instruments that integrate a helicopter pilot and the aircraft's cannon into a single entity: the cannon points wherever the pilot looks, so no time is wasted choosing a direction and aiming at the target.

Although augmented reality has made a lot of progress, it is still far from perfect. For example, applications that rely on GPS are accurate only to within about 30 feet, so further advances in the underlying technology are needed. And as more things become connected to each other via the internet, privacy and cybersecurity become increasingly vulnerable to threats.

Optical Computers

Computing has, over the years, been done through a wide range of mediums. Only in the twentieth and twenty-first centuries have we come to define computers as machines that run on electricity. Whenever someone uses the word computer today, we automatically picture an electronic machine built from resistors and transistors. But if we delve deeper into the history of computing, we find that computers were originally regarded as any devices that helped perform mathematical tasks. The famous abacus, used as a calculating tool, was in fact also a computer.

In the modern era such primitive devices have fallen out of use. But we are slowly reaching the limits of traditional computers, so research is being carried out into new mediums for computing, including optical computers, DNA computers, and neural computers. These technologies will pave the way for new kinds of achievements and enable us to do things not possible with traditional computers.

Optical computing uses photons, produced by diodes or lasers, to perform computation. Photons can provide higher bandwidth than the electrons used in traditional computers. Research is under way to replace the electronic components of current computers with optical equivalents. The transistor is the main building block of modern computers, so an optical transistor is required to build an optical computer. This can be achieved using materials with a nonlinear refractive index. Such devices can be used to make optical logic gates, which in turn can be combined into a computer's CPU.
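As a rough illustration of how a nonlinear medium can act as a logic gate, the toy model below treats the medium as a simple intensity threshold: beams of light combine, and the medium transmits only when the summed intensity exceeds the threshold. The intensity values and threshold are illustrative, not physical measurements:

```python
# Toy model of intensity-based optical logic gates, assuming an
# idealized nonlinear medium that transmits only above a threshold.
# All numbers are illustrative, not real optical parameters.

THRESHOLD = 1.5  # arbitrary units of light intensity
BEAM = 1.0       # intensity of a single "on" beam

def optical_and(a, b):
    """Two input beams combine; the medium transmits only when the
    summed intensity exceeds the threshold (both beams must be on)."""
    combined = (BEAM if a else 0.0) + (BEAM if b else 0.0)
    return combined > THRESHOLD

def optical_not(a):
    """Inverter: a control beam saturates the medium and blocks
    the probe beam, so the output is on only when the input is off."""
    return not a

def optical_xor(a, b):
    # XOR composed from the gates above: (a NAND b) AND (a OR b),
    # just as electronic gates compose into a CPU's logic.
    return optical_and(optical_not(optical_and(a, b)),
                       optical_not(optical_and(optical_not(a),
                                               optical_not(b))))
```

The point of the sketch is compositionality: once AND and NOT exist as physical optical devices, every other gate, and ultimately an arithmetic unit, follows by wiring them together.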

However, there is some disagreement about the future of optical computers. Researchers are concerned that they will not be able to compete with traditional semiconductor-based electronic computers in speed, power consumption, cost, and size.