
Creating Custom Proteins with AI: Nvidia and Evozyne's ProT-VAE Model

Good Morning AI Runners

Here's what we've got for you today:

  • Creating Custom Proteins with AI: Nvidia and Evozyne's ProT-VAE Model

  • Robots with Inner Monologues

Creating Custom Proteins with AI: Nvidia and Evozyne's ProT-VAE Model

Nvidia has teamed up with Evozyne, a startup that engineers proteins, to create a new kind of AI model called the Protein Transformer Variational AutoEncoder (ProT-VAE for short), and it's pretty cool.

Basically, this model can design proteins, the building blocks of our bodies that are also used in medicine and other industries.

The ProT-VAE can quickly generate synthetic protein designs that fit certain parameters, which can speed up the process of developing new medicine.

Nvidia and Evozyne have already used the ProT-VAE to create a new variant of a protein that could potentially treat a congenital disease, and another that could consume carbon dioxide and "combat global warming".

The ProT-VAE is built on Nvidia's BioNeMo framework and uses deep learning and generative AI to create new proteins. It's kind of like how we use AI to understand human language, but now we're using it to understand biology. The researchers can describe different parameters for the proteins and the ProT-VAE will create designs suitable for that purpose.
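To make the "variational autoencoder" part concrete, here's a toy sketch of how a trained VAE generates novel designs: sample a latent vector from the prior, then decode it into a sequence. Everything here (the sizes, weights, and function names) is made up for illustration; it is not Nvidia's actual BioNeMo or ProT-VAE code, and the decoder is untrained.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
SEQ_LEN, LATENT_DIM, VOCAB = 8, 4, len(AMINO_ACIDS)

rng = np.random.default_rng(0)
# Untrained decoder weights; in a real model these are learned so that
# decoded sequences look like the proteins in the training data.
W_dec = rng.normal(size=(LATENT_DIM, SEQ_LEN * VOCAB))

def decode(z):
    """Map a latent vector to an amino-acid sequence (greedy argmax)."""
    logits = (z @ W_dec).reshape(SEQ_LEN, VOCAB)
    return "".join(AMINO_ACIDS[i] for i in logits.argmax(axis=1))

def generate(n):
    """Sample latent vectors from the prior N(0, I) and decode each one."""
    return [decode(rng.normal(size=LATENT_DIM)) for _ in range(n)]

for seq in generate(3):
    print(seq)   # short candidate sequences, one per line
```

The point of the latent space is that nearby vectors decode to related sequences, so researchers can steer generation toward designs with desired properties instead of searching sequence space blindly.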

This kind of AI is still pretty new, but it's on the rise and has a lot of potential for medical and biological research. It's like putting biology into information science and turning it into engineering.

It could potentially become a standard approach for creating new medicine and understanding biology. We're truly just scratching the surface of what's possible with this kind of technology.

It could get even more interesting when pharma giants jump in and invest in AI tech.

Robots with Inner Monologues

Have you ever heard of robots that can talk to themselves? Well, Google is working on making that happen.

They've built a robot system that uses a language model to reason about its own actions. The system can talk itself through a task and decide how to do things better.

This matters because robots need to handle many different tasks and make decisions on their own. Google's team calls the technique "inner monologue", and it helps the robot make better choices.

Think about it: when you're trying to unlock a door, you might say to yourself, "I need to unlock the door. I'll take this key and put it in the lock. No, wait, it doesn't fit. I'll try another one. This one worked; now I can turn the key." The robot does the same thing: it tries to figure out the best way to do something, and if that doesn't work, it tries something else. Google's team is testing this both in simulation and in the real world.
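The key-and-lock example is really a feedback loop: act, narrate, read the result, retry. Here's a minimal sketch of that loop. It's a toy, not Google's actual system; in the real work a large language model plays the planner/narrator role, and `try_key` stands in for the robot acting in the world.

```python
def try_key(key, fitting_key="key_3"):
    """Stand-in for acting in the world; returns whether the key fit."""
    return key == fitting_key

def unlock_door(keys):
    """Try each key, keeping a running 'inner monologue' of attempts."""
    monologue = ["I need to unlock the door."]
    for key in keys:
        monologue.append(f"I'll try {key} in the lock.")
        if try_key(key):
            monologue.append(f"{key} worked; now I can turn it.")
            return True, monologue
        monologue.append(f"No, {key} doesn't fit. I'll try another one.")
    monologue.append("None of the keys fit.")
    return False, monologue

ok, log = unlock_door(["key_1", "key_3"])
print("\n".join(log))
```

The transcript is the interesting part: because each success or failure is written back in natural language, a language-model planner can condition its next decision on everything that has happened so far.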

The concept of "inner monologue" and the ability for different models to communicate with each other in natural language has the potential to revolutionize the way that different computer systems interact with each other.

Currently, when different systems need to communicate, they usually do it through APIs: fixed sets of rules the systems have to follow in order to talk to each other. With advances in natural language processing, and with models able to communicate in a more human-like way, strict APIs may become less necessary.

Instead, imagine a future where different models understand and respond to each other in natural language, much like humans do. That would allow far more seamless and efficient communication between systems, because they could interpret each other's requests in a more intuitive and flexible way. It's an exciting area of research that could greatly improve how computers and machines interact with each other, and with us.
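As a toy contrast (all names and behavior here are invented for illustration): a strict API rejects anything that doesn't match its schema exactly, while a natural-language front end could translate a free-form request into the structured call. A real system would use a language model for that translation; the keyword matcher below just illustrates the shape of the idea.

```python
def strict_api(payload):
    """A rigid interface: callers must supply exactly these fields."""
    required = {"action", "city", "date"}
    if set(payload) != required:
        raise ValueError("schema mismatch")
    return f"{payload['action']} for {payload['city']} on {payload['date']}"

def nl_front_end(request):
    """Hypothetical language-model mediator, reduced to keyword matching.
    It maps flexible human phrasing onto the API's fixed schema."""
    words = request.lower().replace("?", "").split()
    payload = {
        "action": "forecast" if "weather" in words else "unknown",
        "city": "paris" if "paris" in words else "unknown",
        "date": "tomorrow" if "tomorrow" in words else "today",
    }
    return strict_api(payload)

print(nl_front_end("What's the weather in Paris tomorrow?"))
```

Sending that raw sentence straight to `strict_api` would fail; the mediator absorbs the flexibility so the rigid interface doesn't have to.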

Run The AI: PPPs

Pick up (learn):

Two Minute Papers is a cool AI-focused channel on YouTube:

Pilot (play):

Restore treasured photographs with this AI tool

Person:

Dan Jurafsky is a professor of Linguistics and Computer Science at Stanford University. He is known for his work in natural language processing, speech recognition, and computational linguistics. He has also written a popular book on the history and science of food and language called "The Language of Food: A Linguist Reads the Menu." He is a Fellow of the Association for Computational Linguistics and the American Association for the Advancement of Science.

Pic of the day:

FTX CEO Sam Bankman-Fried launched a Substack to "address legal defense":

That's it from RunTheAI for today.

THANK YOU FOR READING AND SEE YOU TOMORROW, SUBSCRIBE TO STAY UPDATED!