Published: October 28, 2025

Please steal my AI research ideas. This is a list of research questions and concrete experiments I would love to see done, but don't have bandwidth to get to. If you are looking to break into AI research (e.g. as an undergraduate, or a software engineer in industry), these are

ccing some folks whose community may be interested. @srush_nlp @ShamKakade6 @karpathy @natolambert @menhguin @willccbb

@tanishqkumar07 I have a doc like that but it's not public 😂

@BrandoHablando Nice!

@BrandoHablando Haha these languished in my Google docs for too long, I figured I wouldn't be able to get to them anytime soon but others might

@tanishqkumar07 "Please steal my AI generated ideas" Fixed it for you

@tanishqkumar07 The MLP in-context learning finding is fascinating - it suggests the attention mechanism might be more about efficiency than capability. I'd hypothesize that MLPs achieve ICL through higher-order feature interactions that effectively "emulate" attention patterns in weight space.

@tanishqkumar07 I'll send you a DM because I'm working on one of these right now and would love to hear your thoughts!

@tanishqkumar07 Amazing problem list, sir. I'd like to try the synthetic data generation problem using semantic permutations. Is there any prior work on this, or where did you get the idea from?

Image in tweet by Tanishq Kumar

@tanishqkumar07 Can I add more?


@tanishqkumar07 Very cool list, I applaud you sir! I love how open you are with your ideas.

@tanishqkumar07 Not available for internship, but here's a reproducible output from an epistemological integrity layer I designed for reasoning engines to validate information based on source, detecting volatility, drift, & injecting chaos/reset to prevent hallucination: https://grok.com/share/bGVnYWN...

Image in tweet by Tanishq Kumar

@tanishqkumar07 https://arxiv.org/abs/2503.148... This paper is a really nice example of “more is different” in RL. They used contrastive RL and demonstrated that scaling the *value* network *depth* leads to emergent behavior. Highly recommended.

@tanishqkumar07 I’m going to sound so smart talking to my friends.

@tanishqkumar07 @jxmnop This is a goldmine

@tanishqkumar07 This is awesome.

@tanishqkumar07 holy post

@tanishqkumar07 @grok can you simplify and eli5

@tanishqkumar07 This is so cool! We are building an NGO to connect the energetic young generation with mentors who have more ideas than time. We are still at a very early stage, but we hope we can eventually serve more people like you.

@tanishqkumar07 These seem llm generated

@tanishqkumar07 Very interesting list. I'll definitely consider working on at least one of them.

@tanishqkumar07 https://zenodo.org/records/174... Enlightenment is but a click away.

@tanishqkumar07 These are some solid research angles. Your insights on power law scaling and new objectives sound particularly intriguing. Have you considered any specific experiments for the latent space overfitting hypothesis?
