Go Back   Wiki NewForum | Latest Entertainment News > Tech Gadgets Forum

Facebook’s RegNet AI model tops Google’s EfficientNet, runs 5 times faster on GPUs

04-12-2020, 10:52 AM
welcomewiki

A team from Facebook AI Research (FAIR) recently developed a new low-dimensional design space for neural networks. Named ‘RegNet’, the new design outperforms traditional available models, including ones from Google, and runs up to five times faster on GPUs.

RegNet produces simple, fast and versatile networks. Moreover, in certain experiments it even outperformed Google’s state-of-the-art (SOTA) EfficientNet models, the researchers said in a paper titled ‘Designing Network Design Spaces’, published on the pre-print repository arXiv. The researchers aimed for “interpretability and to discover general design principles that describe networks that are simple, work well, and generalize across settings”.


The Facebook AI team also conducted controlled comparisons with Google’s EfficientNet, using the same training setup and no training-time enhancements. Introduced in 2019, Google’s EfficientNet design uses a combination of neural architecture search (NAS) and model scaling rules, and represents the current SOTA. With similar training settings and FLOPs, RegNet models outperformed EfficientNet models while also being up to five times faster on GPUs.

Rather than designing and developing individual networks, the FAIR team focused on designing the network design spaces themselves. These comprise huge, possibly infinite populations of model architectures. Design space quality is analyzed using the error empirical distribution function (EDF).
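The idea behind the EDF can be sketched in a few lines: sample many models from a design space, train them, and for each error threshold measure what fraction of sampled models fall below it. The error values below are invented for illustration; only the EDF computation itself reflects the technique described in the paper.

```python
import random

def error_edf(errors, thresholds):
    """Error empirical distribution function: for each threshold e,
    the fraction of sampled models whose error is below e.
    A design space whose EDF rises faster contains a higher
    concentration of good models."""
    n = len(errors)
    return [sum(err < e for err in errors) / n for e in thresholds]

# Hypothetical top-1 errors (%) for models sampled from two design spaces.
random.seed(0)
space_a = [random.uniform(25, 40) for _ in range(500)]
space_b = [random.uniform(28, 45) for _ in range(500)]

thresholds = [30, 35, 40]
print(error_edf(space_a, thresholds))  # space A dominates at every threshold
print(error_edf(space_b, thresholds))
```

Comparing whole distributions of sampled models, rather than a single best architecture, is what lets the method judge the quality of a design space as a population.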

Further analysis of RegNet’s design space also gave the researchers unexpected insights into network design. For instance, they noticed that the depth of the best models is stable across compute regimes, with an optimal depth of about 20 blocks (60 layers).

The researchers also noticed that, while it is common for modern mobile networks to employ inverted bottlenecks, using them degrades performance. “The best models do not use either a bottleneck or an inverted bottleneck,” said the paper.

Separately, Facebook’s AI research team recently developed a “de-identification” system that tricks facial recognition systems into wrongly identifying people in video footage. The system, which also works on live video, uses machine learning to alter key facial features of a subject in real time. FAIR is advancing the state of the art in artificial intelligence through fundamental and applied research in open collaboration with the community.


The history behind FAIR

The social networking giant created the Facebook AI Research (FAIR) group in 2014 to advance the state of the art of AI through open research for the benefit of all. Since then, FAIR has grown into an international research organization with labs in Menlo Park, New York, Paris, Montreal, Tel Aviv, Seattle, Pittsburgh, and London.

(With inputs from IANS)
