AI and computer vision drive cars, robots and smart homes forward

Liran Bar, Director of CEVA Imaging and Vision DSP Core Product Line

Every year we see amazing technologies at CES: automotive, intelligent robots, drones, AR/VR, innovations in smart appliances and much more. The evolution from expensive futuristic toys to practical, useful equipment is exciting, and this year showed significant progress in that direction. Of course, there is also some hype and pure gadget showmanship. Let's take a look at which consumer devices using artificial intelligence and computer vision are set to become mainstream.

Robots and virtual assistants rely on camera eyes and built-in AI to do more work

Since Amazon first launched the Echo in 2014, voice interfaces have been widely adopted. This year it became clear that to reach the next level, vision and artificial intelligence technologies must run on the end device itself. Countless camera-equipped robots appeared at the show, and some stood out.

Omron's Forpheus uses AI technology to play table tennis

The robotics company Omron demonstrated its technology in a lively and entertaining way: a robotic table-tennis master called Forpheus. The robot uses two cameras to track the ball's position and speed, and a patented predictive model to calculate the ball's trajectory so it can sustain a rally with human opponents. An additional camera tracks the facial expressions of the human player to judge whether they are enjoying themselves and keep the game fun. While Forpheus is not a commercial product, it shows that artificial intelligence, sensing and advanced robotics can be applied to a wide range of industrial and consumer functions.
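The two-camera tracking described above can be sketched in a few lines. This is a hypothetical simplification, not Omron's patented model: it assumes the stereo pair has already been triangulated into 3D positions, estimates velocity by finite differences, and predicts the intercept point using plain projectile motion (no spin or air drag, which a real table-tennis robot would have to model).

```python
# Hypothetical sketch of stereo-based trajectory prediction, assuming
# simple projectile motion (no spin, no drag). Positions are in meters:
# x = sideways, y = toward the robot, z = up.

GRAVITY = 9.81  # m/s^2, acting along -z

def estimate_velocity(p0, p1, dt):
    """Finite-difference velocity from two triangulated 3D positions."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

def predict_intercept(pos, vel, plane_y):
    """Predict where the ball crosses the paddle plane y = plane_y.

    Returns ((x, y, z), time_to_impact), or None if the ball
    never reaches the plane.
    """
    vy = vel[1]
    if vy == 0:
        return None                      # ball moving parallel to plane
    t = (plane_y - pos[1]) / vy
    if t < 0:
        return None                      # plane is behind the ball
    x = pos[0] + vel[0] * t              # no horizontal acceleration
    z = pos[2] + vel[2] * t - 0.5 * GRAVITY * t * t  # gravity pulls down
    return (x, plane_y, z), t

# Two positions sampled 10 ms apart, e.g. from the two camera views
p0 = (0.00, 0.00, 0.30)
p1 = (0.01, 0.05, 0.31)
vel = estimate_velocity(p0, p1, 0.01)      # ~(1.0, 5.0, 1.0) m/s
hit, t = predict_intercept(p1, vel, 2.0)   # paddle plane 2 m away
```

In practice the predictor would be re-run every frame as new measurements arrive, so estimation errors shrink as the ball approaches.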

Not all demonstrations went as smoothly as Forpheus' table-tennis skills. LG's smart home robot CLOi had some embarrassing moments, such as failing to respond to voice commands. The similar-looking Jibo demonstrated its social skills, including facial recognition. On sale since October last year, it interacts with users in a more social and personalized way than the leading smart speakers.

SLAMTEC also showcased robots featuring its SLAM positioning and navigation solutions, with Zeus as a versatile robotic platform. UBTECH Robotics, which released the Alexa-powered humanoid robot Lynx last year, this year launched a two-legged robot that can climb stairs and play football.

Sony's robotic dog Aibo, first launched in the late 1990s, has returned in a new, more advanced version. It contains two cameras and multiple sensors so it can recognize its owner and react to touch and sound.

Another innovative pet-related product is Petcube's interactive Wi-Fi pet camera, which lets users check on their pets remotely. One model even lets you dispense a treat to your pet with the flick of a finger.

When will virtual reality applications take off?

As for innovation in the virtual reality market, we have seen steady growth, but not the explosion many expected. This is mainly due to hard challenges such as limited computing resources, power consumption, inside-out tracking and the quality of available content.

At CES 2018, HTC released the HTC Vive Pro, which offers higher resolution and low latency and, more importantly, can stream content to the headset without the cables other devices require. It looks bulkier than the HTC Vive and, given its high price, is aimed at high-end professional users.

A new virtual reality application that may become a mainstream consumer product is Google's VR180. It takes an innovative approach, capturing 3D images with binocular stereo camera technology over a 180-degree field of view instead of the 360 degrees that are inconvenient to watch from a normal viewing angle. Two products dedicated to this new format are Lenovo's Mirage Camera and the YI Horizon VR180 camera. Users can view the results in 3D through a Google Daydream VR headset or in 2D on any screen.

Driverless cars under the spotlight

Driverless cars have been one of the biggest attractions at CES for the past few years. This year, the automotive industry treated driverless cars as a coming reality and instead started looking for the services and applications that will fill the time people no longer spend driving. For example, Ford CEO Jim Hackett, in his keynote speech, described the entire autonomous-vehicle ecosystem as "Life Street." Toyota's e-Palette concept car conveyed a similar message, depicting a versatile, modular vehicle that can be configured as anything from a mobile casino or restaurant to a ride-sharing service or driverless cargo transport.

In the autonomous aviation sector, Bell Helicopter demonstrated how unmanned flight could work in a taxi-like electric helicopter. Visitors to the Bell booth could try the concept with a VR headset.

These examples show that everyone clearly understands the driverless revolution is happening. The only question is what our cities will look like once it arrives.

Intelligence is moving to the end device

The explosive growth of artificial intelligence in the past few years is a direct result of the Internet. Until recently, personal computers and handheld devices were not powerful enough to support deep learning, so big companies like Google and Amazon processed data in the cloud using huge server centers. The advantage of this approach is nearly unlimited computing power, regardless of the capabilities of any particular device. But there are also significant disadvantages. The first is transmission latency, which varies with network coverage, to say nothing of situations with no network at all. Even more important are the privacy and security drawbacks of cloud processing: when dealing with sensitive information, it is best kept on the device rather than sent to external servers that are harder to protect.

These reasons make it clear that using the cloud for deep learning is only a temporary solution. Once embedded platforms can deliver enough performance for AI processing, it will be executed on the end device. You may wonder when embedded platforms will be powerful enough, and the answer is that they already are. The latest flagship phones, like the iPhone X with its embedded neural engine, can recognize faces locally to unlock the phone without sending anything to the cloud. Many other artificial intelligence features can also run on the device, using powerful and efficient vector-processor-based DSPs and dedicated deep-learning engines. Advanced processing and power-saving techniques let these systems consume far less power than the GPUs and other processors used in remote servers, so even small, battery-powered devices can use AI processors such as NeuPro without relying on the cloud.
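The privacy argument for on-device processing can be made concrete with a toy example. This is an illustrative sketch only, not Apple's actual Face ID pipeline: it assumes an on-device network has already turned the enrolled face and the candidate face into embedding vectors, and matches them locally with cosine similarity, so no biometric data ever leaves the device.

```python
# Illustrative on-device face matching (hypothetical, not any vendor's
# real pipeline): compare a fresh embedding against an enrolled template
# entirely locally -- no network traffic, no cloud round-trip.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def unlock(enrolled, candidate, threshold=0.8):
    """Unlock only if the locally computed embeddings are similar enough."""
    return cosine_similarity(enrolled, candidate) >= threshold

enrolled = [0.1, 0.9, 0.3, 0.5]        # stored at enrollment, never uploaded
candidate = [0.12, 0.88, 0.28, 0.52]   # produced by the on-device network
print(unlock(enrolled, candidate))     # True: embeddings nearly identical
```

Because both the template and the comparison stay on the device, there is nothing for an attacker to intercept in transit and no latency penalty from a server round-trip.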
