Published: August 20, 2025

Some interesting Gemma 3 270M fine-tuned use cases 🧵⬇️

Playing and predicting chess

iPhone summarizations

Learning a specific speaking style and persona for an Alien NPC

Generating/reading bedtime stories locally

Personal running coach https://x.com/tarat_211/status...

Here’s how you can build with Gemma 3 270M ↓ https://goo.gle/4luSdHd

@googleaidevs Small enough to run locally with low latency, but still large enough to capture useful abstractions once fine-tuned. For startups and edge applications, that’s huge.

@googleaidevs These use cases are awesome! How easy is it to fine-tune Gemma 3 270M for a new idea? Would love to hear your advice!
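For anyone asking how to get started: a minimal fine-tuning sketch using Hugging Face TRL's SFTTrainer. The dataset file and hyperparameters below are illustrative assumptions, not official guidance:

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Hypothetical dataset: JSONL records with a chat-style "messages" field.
    dataset = load_dataset("json", data_files="my_task.jsonl", split="train")

    trainer = SFTTrainer(
        model="google/gemma-3-270m-it",  # instruction-tuned checkpoint on the Hub
        train_dataset=dataset,
        args=SFTConfig(
            output_dir="gemma3-270m-my-task",
            per_device_train_batch_size=4,
            num_train_epochs=3,
            learning_rate=2e-5,
        ),
    )
    trainer.train()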

@googleaidevs The use cases shown aren’t that compelling. What’s really needed is this: can we train it on, say, frontend programming languages, so we can use it directly on our own systems on CPU?

@googleaidevs Fine-tuning a model for 270M use cases? That's cute. I've seen Drift do that in its sleep while I'm playing Overwatch

@googleaidevs I am still looking for a business or wider use case here. Can someone help me with what I’m missing?

@googleaidevs another use case is a "personal offline dictionary". I think I need to fine-tune Gemma 3 for that 🤔

@googleaidevs Need to test some of these out myself 👌

@googleaidevs I’m going to fine-tune as personal pocket stylist

@googleaidevs Fine-tuning offers exciting possibilities. What specific applications are most promising for wider adoption?

@googleaidevs Useless it is


@googleaidevs Hey I wanted to know what amount of data is recommended for fine-tuning. I am trying to do this for extracting expenses and some metadata from a transcription.
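On data volume: for a narrow extraction task like this, a common rule of thumb is that a few hundred to a few thousand clean, consistent examples go a long way with a small model, though that is a heuristic rather than official guidance. A hypothetical training record in the chat "messages" format most SFT tooling accepts might look like:

    # Hypothetical training record for the expense-extraction task;
    # the field names in the target JSON are illustrative.
    example = {
        "messages": [
            {"role": "user",
             "content": "Transcript: I paid 42.50 for a taxi to the airport on March 3rd."},
            {"role": "assistant",
             "content": '{"amount": 42.50, "category": "transport", "date": "March 3"}'},
        ]
    }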

@googleaidevs Can anyone guide me on how to run it locally on mobile?

@googleaidevs Might be my model of choice for architecture experiments, small enough that most people could do significant continued pretraining on it.

@googleaidevs How about tool calling? I've read at least Gemma 3 12B is recommended for reliable tool calling, but is it reasonable to expect good performance with 270M for a small number of functions after FT using tool traces?
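Nothing in the thread confirms how reliable 270M is at tool calling, but if you fine-tune on traces for a small number of functions, a hypothetical training example could look like the following. The schema is illustrative only, not Gemma's official function-calling format:

    # Hypothetical tool-calling trace for fine-tuning; schema is an assumption.
    trace = {
        "messages": [
            {"role": "user", "content": "What's the weather in Tokyo?"},
            {"role": "assistant",
             "content": '{"tool": "get_weather", "arguments": {"city": "Tokyo"}}'},
            {"role": "tool", "content": '{"temp_c": 21, "conditions": "clear"}'},
            {"role": "assistant", "content": "It's 21 °C and clear in Tokyo."},
        ]
    }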

@googleaidevs Has anyone used a fine-tuned LLM in production? How do you scale inference without massive cost?

@googleaidevs The problem is that ONNX support is lacking right now. It’s very complex. Is there an easy way? I have struggled with various workarounds.

@googleaidevs And what about Gemma 3n? It is touted as an edge device model, but its base model is 11 GB, and even with quantization the size remains about 2.5 GB. It is hard to assume it can run easily on mobiles.

@googleaidevs "AutoModelForSequenceClassification" is not defined for Gemma3-270m. I had to write it myself. I had to use "AutoModelForCausalLM" and a custom top layer with LoRA. Could you add transformers properly so I can finetune the classification? :)

@googleaidevs Can anyone guide me on how to run this model locally on mobile?

@googleaidevs So, how do you do the fine-tuning?

@googleaidevs Najdorf all the way 💪🏻

@googleaidevs I'm fine-tuning for natural reasoning, and so far the results are very good 👀 <think>Train one model to simply reason, and another to take the reasoning and respond</think> <answer>This will be great</answer>

@googleaidevs 🤦‍♂️🤦‍♂️🤦‍♂️

@googleaidevs fine-tuning expands capabilities but also raises the stakes for ethical use and misinformation risk
