OBrienLabs Dev
Pinned repositories
- foundation-transformer-llm Public
Training a non-recurrent transformer base model that uses self-attention, built from first principles for use as a foundation model. The basis is the generative pre-trained transformer architecture in the style of the "Attention Is All You Need" paper (arXiv:1706.03762) from Google (a minimal self-attention sketch follows this list).
- ComputerScience Public
General CS code - a refresh spanning my Carleton CS days in the '90s all the way to Stanford in 2025
- performance Public
Systematically determine the optimal point at which our hardware can be pushed to its fullest possible use. Multithreaded optimization depends on multiple factors, including CPU/GPU type (M4 Max vs. 14900) and compute backend (Metal vs. CUDA). Operations involving space-time tradeoffs, such as heap usage, need to be fine-tuned around batch sizes (see the sweep sketch after this list).
- GenerativeQuarantine Public
Generative AI/ML under quarantine - a sandbox for testing deployments - no generative code in my other orgs
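The foundation-transformer-llm description above references the scaled dot-product self-attention of arXiv:1706.03762. As an illustration only (this is not code from the repository; the dimensions and weight shapes are assumptions), a single attention head can be sketched as:

```python
# Minimal sketch of scaled dot-product self-attention per "Attention Is All
# You Need" (arXiv:1706.03762). Illustrative only; the shapes below are
# arbitrary examples, not settings from foundation-transformer-llm.
import numpy as np

def self_attention(x: np.ndarray, w_q: np.ndarray, w_k: np.ndarray,
                   w_v: np.ndarray) -> np.ndarray:
    """Single-head self-attention over a (seq_len, d_model) input."""
    q = x @ w_q                        # queries: (seq_len, d_k)
    k = x @ w_k                        # keys:    (seq_len, d_k)
    v = x @ w_v                        # values:  (seq_len, d_v)
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)    # scaled dot products: (seq_len, seq_len)
    # Numerically stable softmax over each row.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                 # weighted sum of values: (seq_len, d_v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model, d_k = 8, 16, 16
    x = rng.standard_normal((seq_len, d_model))
    w_q, w_k, w_v = (rng.standard_normal((d_model, d_k)) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)  # (8, 16)
```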
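The performance repository's goal of finding the utilization sweet spot can be sketched as a parameter sweep over worker counts and batch sizes. This is an assumption-laden illustration (the workload, the grid values, and the use of ProcessPoolExecutor are mine, not the repository's benchmark):

```python
# Sketch of a thread/batch-size sweep: time a fixed workload across a grid
# of worker counts and batch sizes to find the point of full utilization.
import time
from concurrent.futures import ProcessPoolExecutor

def workload(batch: list[int]) -> int:
    # Stand-in compute kernel; replace with the real operation under test.
    return sum(i * i for i in batch)

def sweep(total_items: int = 1_000_000,
          worker_counts=(1, 2, 4, 8),
          batch_sizes=(1_000, 10_000, 100_000)) -> None:
    items = list(range(total_items))
    for workers in worker_counts:
        for batch_size in batch_sizes:
            batches = [items[i:i + batch_size]
                       for i in range(0, total_items, batch_size)]
            start = time.perf_counter()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                list(pool.map(workload, batches))
            elapsed = time.perf_counter() - start
            print(f"workers={workers:2d} batch={batch_size:7d} "
                  f"elapsed={elapsed:.3f}s")

if __name__ == "__main__":
    sweep()
```

Larger batches amortize per-batch dispatch overhead (here, pickling work to the worker processes) at the cost of peak memory per worker, which is the space-time tradeoff the description mentions.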