
2022's Top ML Innovations: Insights from Google Research

Good morning, AI Runners!

Here's what we've got for you today:

  • 2022's Top ML Innovations: Insights from Google Research

  • Big players vs. little guys: small teams and individuals can still make a big impact in AI

2022's Top Machine Learning Innovations: Insights from Google Research

Google Research has published a blog post highlighting the major developments in machine learning from the past year, with a focus on Google's own work (a bit biased, naturally).

It's a useful resource for keeping up to date on the latest advancements in the field, especially for those who don't have the time to read every individual paper.

Before you dive into the blog, here's an example it provides that gives some insight into how these models are trained:

The picture above demonstrates FindIt, a unified model that can handle three different tasks: understanding referring expressions, localizing text-based information, and detecting objects. It performs well even on object types and classes it was not trained on; for example, it can accurately locate a desk when asked to "Find the desk". Results from MattNet are included for comparison.

It's really cool (and important) to understand what goes on behind these new ML models being released every few weeks now.

OK, OK, now go read through the blog and learn something new today.

Big Players vs. Little Guys: Small Teams and Individuals Can Still Make a Big Impact in AI

Two years ago, if you wanted to get your foot in the door of AI research, all you really needed was a couple of GPUs to train or fine-tune your models. But things have changed.

Nowadays, the entry level for AI research is being able to regularly train a 50-70 billion parameter model on hundreds of billions of tokens. In other words, the bar has been raised, and it's becoming increasingly difficult to make significant advancements in AI without access to serious compute power and resources. But, as always, new opportunities can be found.
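To see why that bar is so high, here's a back-of-envelope sketch using the common ~6·N·D rule of thumb for training compute (N = parameters, D = training tokens). The GPU peak throughput and utilization figures below are illustrative assumptions, not measurements:

```python
# Rough training-compute estimate via the ~6*N*D FLOPs rule of thumb.
# Hardware numbers are assumptions for illustration only.

N = 70e9           # 70B parameters
D = 300e9          # 300B training tokens
flops = 6 * N * D  # ~1.26e23 FLOPs total

gpu_peak = 312e12  # assumed A100 bf16 peak, FLOP/s
utilization = 0.4  # assumed achievable utilization
gpu_seconds = flops / (gpu_peak * utilization)
gpu_years = gpu_seconds / (3600 * 24 * 365)
print(f"{flops:.2e} FLOPs, roughly {gpu_years:.0f} GPU-years on one A100")
```

Tens of GPU-years for a single training run is why this is currently out of reach for hobbyists with a couple of GPUs.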

We believe that the future of AI lies in optimization and open-source community implementation. This means that instead of relying on a single, powerful machine or organization to drive AI advancements, we will see more collaboration and sharing of resources within communities of researchers and developers.

This shift towards community-driven AI research has already begun to take shape. Open-source libraries and frameworks have made it easier for researchers and developers to share their work and collaborate on projects.

But right now, it's pretty much a two-horse race between big tech companies like Google and OpenAI. They're the ones with the deep pockets to hire the best engineers and cover the massive costs of training AI models.

But here's the thing, Moore's Law is still in play. And that means that technology is continuing to get better and cheaper. So, it's only a matter of time before the little guys start to catch up. Just imagine a world where small companies and even individuals have access to the same level of computational power as the big players. Can you even imagine the kind of creativity that would bring to the field of AI research?

So, if you're interested in getting involved in AI research, don't be discouraged by the high bar that's been set. Instead, look for ways to collaborate and share resources with others in the community.

Pic of the day:

everyone's AI nightmare:

That's it from RunTheAI for today.

Thank you for reading, and see you tomorrow. Subscribe to stay updated!