
Fine-tuning Gemma with LoRA in Google Colab

Fine-tune Gemma models in Keras using LoRA → https://goo.gle/407Kise

Learn how to fine-tune large language models efficiently using LoRA (Low-Rank Adaptation) with Gemma, an open model from Google. Watch along as Googler Paige Bailey uses Google Colab and the Databricks Dolly 15k dataset to demonstrate this fine-tuning technique.
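For reference, below is a minimal sketch of the workflow the demo covers, assuming the keras-nlp library with a JAX backend in Colab and a local copy of databricks-dolly-15k.jsonl; the preset name, LoRA rank, and hyperparameters are illustrative rather than the exact values used in the video.

```python
# Minimal LoRA fine-tuning sketch for Gemma in Keras (illustrative values).
import os
import json

os.environ["KERAS_BACKEND"] = "jax"  # assumption: JAX backend, as in the Keras guide

import keras
import keras_nlp

# Build prompts from the Databricks Dolly 15k instruction dataset
# (databricks-dolly-15k.jsonl, downloaded separately).
data = []
with open("databricks-dolly-15k.jsonl") as f:
    for line in f:
        example = json.loads(line)
        if example["context"]:  # skip examples with context to keep prompts short
            continue
        data.append(
            "Instruction:\n{instruction}\n\nResponse:\n{response}".format(**example)
        )
data = data[:1000]  # small subset so the demo fits in a Colab session

# Load Gemma and enable LoRA on the backbone: only the low-rank adapter
# weights are trained, while the base model weights stay frozen.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.preprocessor.sequence_length = 256

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)
```

See the linked tutorial above for the full, authoritative walkthrough.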

Chapters:
0:00 - What is Gemma?
0:44 - What is Low-Rank Adaptation (LoRA)?
1:20 - [Demo] Setting up your AI environment
2:45 - [Demo] Fine-tuning with LoRA
3:33 - Conclusion

Watch more Generative AI Experiences for Developers → https://goo.gle/genAI4devs
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech


Speaker: Paige Bailey
Products Mentioned: Gemma, Google Colab, Gemini

#GoogleCloud #DevelopersAI
